redhat-cop / container-pipelines



container-pipelines's Introduction


Container Pipelines

Let's get the ball rolling on some Container-driven CI & CD

Catalog

The following is a list of the pipeline samples available in this repository:

Makeup of a "Pipeline"

We understand that everyone's definition of a pipeline is a little (maybe a lot) different. Let's talk about what WE mean.

In this context, a pipeline is defined as all of the technical collateral required to take application source code and get it deployed through its relevant lifecycle environments on an OpenShift cluster (or multiple clusters).

A few guiding principles for a pipeline quickstart in this repo:

  • Everything as code. A pipeline should require as few commands as possible to deploy (we recommend an openshift-applier compatible inventory).
  • Use OpenShift Features. The intention of these quickstarts is to showcase how OpenShift can be used to make pipeline development and management simpler. Use of features like slave pods, webhooks, Source-to-Image, and the JenkinsPipelineStrategy is highly desired where possible.
  • Sharing is Caring. If there are things that can be common to multiple pipelines (templates, builder images, etc.), let's refactor to make them shared.

Typically the things required to build a pipeline sample include:

  • Project definitions (each representing a lifecycle environment)
  • A Jenkins Master
  • A Jenkinsfile
  • A build template that includes all things necessary to get the source code built into a container image. This means:
    • A JenkinsPipelineStrategy BuildConfig, which is used to inject the pipeline into Jenkins automatically
    • A binary Source-strategy BuildConfig, which is used to build the container image (a rough sketch of both follows this list)
  • A deployment template that includes all the necessary objects to run the application in an environment. At a minimum:
    • A DeploymentConfig definition
    • A Service definition
  • It might also include:
    • Routes
    • Secrets
    • ConfigMaps
    • StatefulSets
    • etc.
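
As a rough illustration of the two BuildConfig flavors called out above, here is a minimal sketch; the object names, Git URL, and builder image tag are placeholders rather than values taken from any specific quickstart:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app-pipeline           # injects the pipeline into Jenkins
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/my-app.git
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app                    # builds the image from a binary upload
spec:
  source:
    type: Binary
    binary: {}
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: redhat-openjdk18-openshift:1.2
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest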

See our basic Spring Boot example for a very simple reference architecture.

Automated Deployments

These pipeline quickstarts include an Ansible inventory through which they can be automatically deployed and managed using the OpenShift Applier role.
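
The inventory follows the openshift-applier format. The snippet below is only a hedged sketch of what such an inventory typically looks like - the file path and entry names are illustrative, not copied from a particular quickstart:

openshift_cluster_content:
- object: projects
  content:
  - name: "create environments"
    file: "{{ inventory_dir }}/../.openshift/projects/projects.yml"
    tags:
    - projects
- object: deployments
  content:
  - name: "deploy jenkins to build environment"
    template: "openshift//jenkins-ephemeral"
    tags:
    - build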

Optional: Use a container that contains Ansible

This way you don't have to install Ansible on your machine. Just run oc run -i -t tool-box-test --image=quay.io/redhat-cop/tool-box --rm bash. More info on the tool-box container can be found at https://github.com/redhat-cop/containers-quickstarts/tree/master/tool-box.

container-pipelines's People

Contributors

alberttwong, alyibrahim, bparry02, charlbrink, cpeters, davgordo, deweya, etsauer, fmenesesg, garethahealy, gautric, gl4di4torrr, hhellbusch, itewk, jaredburck, kenwilli, malacourse, mluyo3414, pabrahamsson, pcarney8, raffaelespazzoli, renovate[bot], sabre1041, schen1, sherl0cks, siliconjesus, springdo, syvanen, tinexw, tylerauerbeck


container-pipelines's Issues

Error during building container Image

Hello Team,

I was trying to deploy the application on my OpenShift cluster, but it fails to build the container image and gives me the Jenkins error below:

ERROR: Error running start-build on at least one item: [buildconfig/basic-spring-boot];
{reference={}, err=Uploading directory "oc-build" as binary input for the build ...
error: The build basic-spring-boot-build/basic-spring-boot-1 status is "Failed", verb=start-build, cmd=oc --server=https://172.30.0.1:443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=basic-spring-boot-build --token=XXXXX start-build buildconfig/basic-spring-boot --from-dir=oc-build --wait -o=name , out=build/basic-spring-boot-1, status=1}

Please advise what is wrong here.

I am running the commands manually as described in your README, and it fails while executing the build from Jenkins.

basic-spring-boot fails to deploy jenkins when context not set to build namespace

In the applier manifest, the jenkins template is deployed, but no namespace is specified:

  - name: "deploy jenkins to build environment"
    template: "openshift//jenkins-ephemeral"
    tags:
      - build

This causes the template deployment to fail if the oc project is set to a deleted project, for example.

$ oc new-project my-newproject
$ oc delete project my-newproject
$ ansible-playbook -i ./.applier/ galaxy/openshift-applier/playbooks/openshift-cluster-seed.yml
...
TASK [openshift-applier : Apply OpenShift objects based on template with params for 'deployments : deploy jenkins to build environment'] ***
failed: [localhost] (item={'oc_path': ''}) => {"ansible_loop_var": "oc_param_file_item", "changed": true, "cmd": "oc process       openshift//jenkins-ephemeral            --ignore-unknown-parameters   | oc apply      -f -      \n", "delta": "0:00:02.483306", "end": "2020-05-07 09:55:17.817910", "failed_when_result": true, "msg": "non-zero return code", "oc_param_file_item": {"oc_path": ""}, "rc": 1, "start": "2020-05-07 09:55:15.334604", "stderr": "error: unable to process template\n  processedtemplates.template.openshift.io is forbidden: User \"bparry\" cannot create resource \"processedtemplates\" in API group \"template.openshift.io\" in the namespace \"my-newproject\"\nerror: no objects passed to apply", "stderr_lines": ["error: unable to process template", "  processedtemplates.template.openshift.io is forbidden: User \"bparry\" cannot create resource \"processedtemplates\" in API group \"template.openshift.io\" in the namespace \"my-newproject\"", "error: no objects passed to apply"], "stdout": "", "stdout_lines": []}

The workaround for this issue is to create the build project manually, before running the applier.
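
A more durable fix might be to pin the template to the build namespace in the applier manifest, so the deployment no longer depends on the current oc context. Assuming the inventory exposes a build-namespace variable (shown here as sb_build_namespace) and that the applier honours a namespace key on content entries, the entry could look like:

  - name: "deploy jenkins to build environment"
    template: "openshift//jenkins-ephemeral"
    namespace: "{{ sb_build_namespace }}"
    tags:
      - build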

Build failing due to CVE for Spring Fox

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 05:50 min
[INFO] Finished at: 2019-04-22T15:20:58Z
[INFO] Final Memory: 45M/380M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.owasp:dependency-check-maven:5.0.0-M1:check (default) on project shift-rest:
[ERROR]
[ERROR] One or more dependencies were identified with vulnerabilities that have a CVSS score greater than or equal to '4.0':
[ERROR]
[ERROR] springfox-core-2.8.0.jar: CVE-2019-11065
[ERROR]
[ERROR] See the dependency-check report for more details.
[ERROR] -> [Help 1]

How to bypass this?

Need Reference Pipeline for Operators

Currently the cert operator does not have a reference pipeline for building and deploying itself. We should design and build a pipeline and deployment process for operators in general.

Feature Request: Pipeline Enhancements

Some desired features:

@redhat-cop/containerize-it

Fix dead links

We've got some dead links throughout the repo that could use some cleaning up.

Create common integrations with On premise/cloud tools

I'm putting this in as a sort of placeholder to account for being able to connect with credentials/authentication etc. to various tools that we frequently see on client sites (GitLab, Nexus, Artifactory, etc.).
These pipelines are a great starting point, but a lot of time is still spent wiring them into specific environments.

Investigate the use of kubeval as part of the CI

Was having a look in the instrumenta (conftest creators) org and noticed kubeval:

It's a CLI that does schema validation for k8s YAML. Maybe something we can look at for this and other repos, which contain lots of templates/YAML?

Currently it won't work with OCP objects - but if people think it's a good idea, I'm sure we can raise some PRs to get the OpenAPI info hooked up into the kubeval repos.

cc @pabrahamsson @etsauer @tylerauerbeck
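
For reference, a hypothetical GitHub Actions workflow wiring kubeval into pull request validation could look roughly like this; the workflow name, file paths, and reliance on --ignore-missing-schemas for OpenShift-specific kinds are all assumptions to be verified:

name: kubeval
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install kubeval
        run: |
          curl -sL https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz | tar xz
          sudo mv kubeval /usr/local/bin/
      - name: Validate manifests
        run: |
          # OpenShift-specific kinds have no upstream schema yet, so skip unknown kinds for now
          find . -path '*/.openshift/*' -name '*.yml' -print0 | xargs -0 kubeval --ignore-missing-schemas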

basic-spring-boot: error verifying deployment when other pods are present.

I am getting this error during the verify step:

Entering watch
Running watch closure body
[Pipeline] {
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] echo
pod: pods/todomvc-11-hvsbc
[Pipeline] readFile
[Pipeline] _OcAction

watch closure threw an exception: "java.lang.NullPointerException: Cannot invoke method getAt() on null object; Cannot invoke method getAt() on null object".

.....
[Pipeline] End of Pipeline
java.lang.NullPointerException: Cannot invoke method getAt() on null object
	at org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:91)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:35)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
	at org.kohsuke.groovy.sandbox.impl.Checker$10.call(Checker.java:418)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetArray(Checker.java:420)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getArray(SandboxInvoker.java:45)
	at com.cloudbees.groovy.cps.impl.ArrayAccessBlock.rawGet(ArrayAccessBlock.java:21)
	at WorkflowScript.run(WorkflowScript:118)
	at com.openshift.jenkins.plugins.OpenShiftDSL$OpenShiftResourceSelector.untilEach(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:1114)
	at com.openshift.jenkins.plugins.OpenShiftDSL$OpenShiftResourceSelector.withEach(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:1335)
	at com.openshift.jenkins.plugins.OpenShiftDSL$OpenShiftResourceSelector.untilEach(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:1113)
	at com.openshift.jenkins.plugins.OpenShiftDSL$OpenShiftResourceSelector.watch(jar:file:/var/lib/jenkins/plugins/openshift-client/WEB-INF/lib/openshift-client.jar!/com/openshift/jenkins/plugins/OpenShiftDSL.groovy:1082)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74)
	at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
	at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:66)
	at sun.reflect.GeneratedMethodAccessor144.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:35)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:32)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:331)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:82)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:243)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:231)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
Caused: java.util.concurrent.ExecutionException
	at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution.get(CpsBodyExecution.java:297)
	at com.openshift.jenkins.plugins.pipeline.OcWatch$Execution.run(OcWatch.java:159)
	at com.openshift.jenkins.plugins.pipeline.OcWatch$Execution.run(OcWatch.java:79)
	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
	at hudson.security.ACL.impersonate(ACL.java:260)
	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE

The only thing that is different from the normal run presented in this example is that extraneous pods exist in the project:

oc get pods
NAME                        READY     STATUS    RESTARTS   AGE
todomvc-11-hvsbc            1/1       Running   0          8m
zalenium-40000-96wmx        1/1       Running   0          13h
zalenium-40000-x4k8r        1/1       Running   0          13h
zalenium-7c5f9fbdcd-twzsl   1/1       Running   0          13h

where todomvc is the name of the app being deployed.
To make things more complicated, this error is intermittent; it happens about 3 times out of 4.

basic-spring-boot-tekton - Default SA doesn't have edit privileges over deployment namespaces

PipelineRun was failing while deploying to basic-spring-boot-dev due to authorization issues:

Error from server (Forbidden): imagestreams.image.openshift.io "basic-spring-boot" is forbidden: User "system:serviceaccount:basic-spring-boot-build:pipeline" cannot get resource "imagestreams" in API group "image.openshift.io" in the namespace "basic-spring-boot-dev"

After some digging, I noticed the PipelineRuns were running with pipeline as their service account instead of tekton, which is where all the RoleBindings in this repository are applied.
For reference, I'm installing the "OpenShift Pipelines Operator", version 1.0.1, via OperatorHub.

I got it deploying correctly by changing the default service account to tekton in the config-defaults ConfigMap in the openshift-pipelines namespace. Alternatively, giving the pipeline service account the necessary privileges also makes the deployment succeed. Not sure which is the best approach here.
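
If keeping the operator default as pipeline, another option is a plain RoleBinding per deployment namespace granting the access the error message complains about. A minimal sketch for the dev namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-edit
  namespace: basic-spring-boot-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: pipeline
  namespace: basic-spring-boot-build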

Add support for environments behind Proxies

I am trying to deploy the multi-cluster pipeline behind a proxy, but the build fails as it reaches out to Maven repositories outside of the network. What configuration would be needed for it to work with proxies?
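
One direction worth trying (not verified against this quickstart) is to pass the standard proxy variables to the S2I build through the binary BuildConfig, on the assumption that the Java builder image translates them into Maven proxy settings - check the builder image documentation before relying on this. The Jenkins agent pods would likely need the same variables for any steps that reach external repositories directly.

  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: redhat-openjdk18-openshift:1.2   # placeholder builder image
        namespace: openshift
      env:
      - name: HTTP_PROXY
        value: http://proxy.example.com:3128
      - name: HTTPS_PROXY
        value: http://proxy.example.com:3128
      - name: NO_PROXY
        value: .cluster.local,.svc,172.30.0.1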

basic-spring-boot pipeline image build fails when build namespace name changes

When using customized namespace names with the basic-spring-boot pipeline (e.g. bp-basic-spring-boot-build), the Jenkins build fails to find the binary build configuration. It is not using the sb_application_name value defined in the applier configuration.

ERROR: Error running start-build on at least one item: [buildconfig/bp-basic-spring-boot];
{reference={}, err=Uploading directory "oc-build" as binary input for the build ...

Uploading finished
Error from server (NotFound): buildconfigs.build.openshift.io "bp-basic-spring-boot" not found

The Jenkinsfile begins by setting an app name variable that is derived from the Jenkins job name.

openshift.withCluster() {
  env.NAMESPACE = openshift.project()
  env.POM_FILE = env.BUILD_CONTEXT_DIR ? "${env.BUILD_CONTEXT_DIR}/pom.xml" : "pom.xml"
  env.APP_NAME = "${JOB_NAME}".replaceAll(/-build.*/, '')
  echo "Starting Pipeline for ${APP_NAME}..."
  env.BUILD = "${env.NAMESPACE}"
  env.DEV = "${APP_NAME}-dev"
  env.STAGE = "${APP_NAME}-stage"
  env.PROD = "${APP_NAME}-prod"
}

The APP_NAME value is set to the part of the build namespace name that precedes -build. This logic assumes that the namespace names are derived from the app name.

The workaround for this issue is to change sb_application_name to match the namespace names.

A simple solution for this might be to change the applier configuration to show that the application name and namespace names are related. Something like:

# NOTE: the jenkins pipeline expects the namespace names to be derived from the application name
sb_application_name: basic-spring-boot
sb_build_namespace: "{{ sb_application_name }}-build"
sb_dev_namespace:   "{{ sb_application_name }}-dev"
sb_stage_namespace: "{{ sb_application_name }}-stage"
sb_prod_namespace:  "{{ sb_application_name }}-prod"

What type of permissions are needed to run ansible-playbook

in the tool-box container.

ansible-playbook -i ./.applier/ galaxy/openshift-applier/playbooks/openshift-cluster-seed.yml

output

TASK [openshift-applier : Create OpenShift objects based on static files for 'projects : create environments'] ***
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["oc", "create", "-f", "/home/tool-box/container-pipelines/basic-spring-boot/.applier/../.openshift/projects/projects.yml"], "delta": "0:00:00.502697", "end": "2019-01-25 20:27:05.661199", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2019-01-25 20:27:05.158502", "stderr": "Error from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched\nError from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched\nError from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched\nError from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched", "stderr_lines": ["Error from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched", "Error from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched", "Error from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched", "Error from server (Forbidden): projectrequests.project.openshift.io is forbidden: User \"system:serviceaccount:albert:default\" cannot create projectrequests.project.openshift.io at the cluster scope: no RBAC policy matched"], "stdout": "", "stdout_lines": []}
	to retry, use: --limit @/home/tool-box/container-pipelines/basic-spring-boot/galaxy/openshift-applier/playbooks/openshift-cluster-seed.retry

Future proof pipeline scripts

Both blue-green-spring and basic-tomcat Jenkinsfiles will output the warning below.
It should be easy to leverage the pipeline-library to update these scripts, see #32 for examples.

[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Verify Deployment to basic-tomcat-dev)
[Pipeline] openshiftVerifyDeployment

NOTE: steps like this one from the OpenShift Pipeline Plugin will not be supported against OpenShift API Servers later than v3.11

Get this repo under CI

We would like to bake some continuous integration testing into the pull request process for this repository. There are many people who use this repo, and we want to make sure that our pipelines are in good working order. To that end, I think we should look at implementing a testing framework similar to the one we have for containers-quickstarts.

Then we can configure our internal Prow deployment to point at this repo and run/manage the testing.

Instructions are needed on how to create a new project from this template

Part A - Set up the environment

  1. Create a new project to hold the toolbox container: oc new-project toolbox
  2. Run oc project toolbox to switch to the new project.
  3. Run oc run -i -t tool-box --image=quay.io/redhat-cop/tool-box --rm bash to provision the toolbox pod.
  4. Shell into the new toolbox pod: oc rsh <toolbox pod>
  5. Clone this repo: git clone https://github.com/redhat-cop/container-pipelines.git
  6. Go to the project files: cd container-pipelines/basic-spring-boot

Part B - Make new project names and point to a new git repo for code

  1. Fork https://github.com/redhat-cop/spring-rest, e.g. to https://github.com/alberttwong/spring-rest.git. This project is important because it's constructed in a way that will work with the CI/CD and OpenShift.
  2. Update .openshift/projects/projects.yml with the new sb_application_repository_url value from step 1.
  3. Edit the config files with the new project names. There should be 4 namespace/project name changes in each file, plus the application name in seed-hosts.yml. The files are .openshift/projects/projects.yml and .applier/group_vars/seed-hosts.yml.

Part C - Deploy the template

  1. Log in to the cluster: oc login <url>
  2. Run ansible-galaxy install -r requirements.yml --roles-path=galaxy in the container-pipelines/basic-spring-boot directory.
  3. Run ansible-playbook -i ./.applier/ galaxy/openshift-applier/playbooks/openshift-cluster-seed.yml in the container-pipelines/basic-spring-boot directory

Part D - Make code changes

  1. Make a code change and commit it to the git repo that you created, e.g. https://github.com/alberttwong/spring-rest.git

Part E - Make a new build

You can launch a new build by going to the "projectname - BUILD" project, opening the pipeline, and clicking "Start Pipeline" - or, from the CLI, for example oc project albert-basic-spring-boot-build followed by oc start-build albert-basic-spring-boot-pipeline

Part X - I made a mistake or I need to delete the project

  1. Run oc delete project XXX once for each of the 4 projects that were created.

Sonarqube in secure Spring Boot is broken

The SonarQube image that is built as part of secure Spring Boot fails to start because it uses the upstream sonarqube:latest image, which is known not to work on OpenShift without sysctl changes.

ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
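
One commonly used workaround for these Elasticsearch bootstrap checks (borrowed from general Elasticsearch-on-Kubernetes guidance, not specific to this quickstart) is a privileged init container that raises vm.max_map_count before SonarQube starts. Note that this requires the pod's service account to be permitted to run privileged containers, and it does not address the file descriptor limit, which is a node/kubelet setting:

spec:
  template:
    spec:
      initContainers:
      - name: set-max-map-count
        image: docker.io/busybox        # any image that ships sysctl
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true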

Sonar scan fails for secure Spring Boot

The pipeline for secure Spring Boot fails when trying to run SonarQube static analysis:

ERROR: SonarQube installation defined in this job (sonar) does not match any configured installation. Number of installations that can be configured: 1.
If you want to reassign jobs to a different SonarQube installation, check the documentation under https://redirect.sonarsource.com/plugins/jenkins.html
Finished: FAILURE

Basic Spring Boot applier inventory broken for non-privileged users

@JaredBurck it looks like a recent addition you pushed is causing an issue for users who do not have edit rights to the openshift namespace. The issue appears to come from the ImageStream that was introduced in the build template:

https://github.com/redhat-cop/container-pipelines/blob/master/basic-spring-boot/applier/templates/build.yml

It results in this error during the Ansible run:

fatal: [localhost]: FAILED! => {"changed": true, "cmd": "oc process  --local   -f  /home/esauer/src/container-pipelines/basic-spring-boot/applier/inventory/../templates/build.yml  --param-file=/home/esauer/src/container-pipelines/basic-spring-boot/applier/inventory/../params/build-dev | oc apply  -f - ", "delta": "0:00:01.355133", "end": "2018-05-01 22:34:55.003467", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2018-05-01 22:34:53.648334", "stderr": "Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nError from server (Forbidden): User \"esauer\" cannot patch imagestreams in the namespace \"openshift\": User \"esauer\" cannot \"patch\" \"imagestreams\" with name \"redhat-openjdk18-openshift\" in project \"openshift\" (patch imagestreams redhat-openjdk18-openshift)", "stderr_lines": ["Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply", "Error from server (Forbidden): User \"esauer\" cannot patch imagestreams in the namespace \"openshift\": User \"esauer\" cannot \"patch\" \"imagestreams\" with name \"redhat-openjdk18-openshift\" in project \"openshift\" (patch imagestreams redhat-openjdk18-openshift)"], "stdout": "imagestream \"spring-rest\" created\nbuildconfig \"spring-rest-pipeline\" created\nbuildconfig \"spring-rest\" created", "stdout_lines": ["imagestream \"spring-rest\" created", "buildconfig \"spring-rest-pipeline\" created", "buildconfig \"spring-rest\" created"]}
	to retry, use: --limit @/home/esauer/src/openshift-applier/playbooks/openshift-cluster-seed.retry

PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost                  : ok=66   changed=4    unreachable=0    failed=1   
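
A possible way around this (sketch only - the exact image tag is an assumption) is for the build template to reference the existing ImageStreamTag in the openshift namespace from the BuildConfig, instead of declaring an ImageStream there that non-privileged users cannot patch:

  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: redhat-openjdk18-openshift:1.2
        namespace: openshift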

Create a .NET Core Pipeline

I have numerous people asking to start using .NET Core on OpenShift. The pipeline I created for .NET is based on the Spring Boot pipeline.

Unable to deploy skopeo or image mirror examples for multi-cluster-spring-boot project

I followed the quickstart instructions to create a pipeline in the Dev cluster. When I log in to the Dev cluster and run the command

ansible-playbook -i skopeo-example/.applier/inventory-dev/ galaxy/openshift-applier/playbooks/openshift-cluster-seed.yml

the stage project is created (but not the dev one), and I get the following error:

TASK [openshift-applier : Apply OpenShift objects based on template with params for 'deployments : deploy jenkins'] ********************************************************************
failed: [localhost] (item={'oc_option_f': ' -f', 'oc_path': '/home/cziesman/workspace/container-pipelines/multi-cluster-spring-boot/skopeo-example/.applier/params/jenkins', 'oc_process_local': ' --local', 'local_path': True}) => {"ansible_loop_var": "oc_param_file_item", "changed": true, "cmd": "oc process       openshift//jenkins-ephemeral   -n multicluster-spring-boot-dev       --param-file=\"/home/cziesman/workspace/container-pipelines/multi-cluster-spring-boot/skopeo-example/.applier/params/jenkins\"   --ignore-unknown-parameters   | oc apply   -n multicluster-spring-boot-dev   -f -      \n", "delta": "0:00:00.396024", "end": "2020-02-25 19:30:34.542195", "failed_when_result": true, "msg": "non-zero return code", "oc_param_file_item": {"local_path": true, "oc_option_f": " -f", "oc_path": "/home/cziesman/workspace/container-pipelines/multi-cluster-spring-boot/skopeo-example/.applier/params/jenkins", "oc_process_local": " --local"}, "rc": 1, "start": "2020-02-25 19:30:34.146171", "stderr": "error: unable to process template\n  processedtemplates.template.openshift.io is forbidden: User \"cziesman-redhat.com\" cannot create processedtemplates.template.openshift.io in the namespace \"multicluster-spring-boot-dev\": no RBAC policy matched\nerror: no objects passed to apply", "stderr_lines": ["error: unable to process template", "  processedtemplates.template.openshift.io is forbidden: User \"cziesman-redhat.com\" cannot create processedtemplates.template.openshift.io in the namespace \"multicluster-spring-boot-dev\": no RBAC policy matched", "error: no objects passed to apply"], "stdout": "", "stdout_lines": []}

I am using a 3.11 cluster from CFME.

When I try to run the image mirror example, the instructions say to create the image-mirror-example/.applier/params/prod-cluster-credentials file, but there is no guidance on what values to use for <API_URL> or <REGISTRY URL>, so I am unable to create the projects that way, either.

Builds and Deployments use different label key to indicate an App

It looks like there is a pattern in our pipelines where the build templates use the label application, but the deployments use the label app.

$ cat basic-nginx/.openshift/*/*.yml | grep app.*:
      application: ${APPLICATION_NAME}
      application: ${APPLICATION_NAME}
      application: ${APPLICATION_NAME}
      app: ${APPLICATION_NAME}
          app: ${APPLICATION_NAME}
      app: ${APPLICATION_NAME}
      app: ${APPLICATION_NAME}
      app: ${APPLICATION_NAME}
      app: ${APPLICATION_NAME}

We should pick one and be consistent.
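
For illustration, standardizing on app would mean the build and deployment templates label (and, where relevant, select) objects the same way:

# labels applied to every object in both templates
metadata:
  labels:
    app: ${APPLICATION_NAME}

# selectors on Services/DeploymentConfigs use the same key
spec:
  selector:
    app: ${APPLICATION_NAME}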
