

fabric8-pipeline-library's Issues

new function to promote the release to all the environments in order with optional human approval

we define the environments for a team in the fabric8-environments ConfigMap

Rather than having lots of different jobs that include Staging and/or Production, it might be nicer to have a single PromoteAll job and function that promotes a release to all environments defined in the ConfigMap.

Possibly adding an include/exclude list too? e.g. you'd typically want to exclude Test.

So something like

promoteToEnvironments()

which would default to something like

promoteToEnvironments(excludes: ['Test'], includes: ['*'])

Or something like that?
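A sketch of how the proposed function could look, assuming a hypothetical getEnvironmentsFromConfigMap() helper that reads the ordered environment list from the fabric8-environments ConfigMap and a promoteToEnvironment() step that does a single promotion (both helper names are invented here, not real library API):

```groovy
// Hypothetical sketch - helper names are illustrative, not real library API
def promoteToEnvironments(Map args = [:]) {
  def includes = args.get('includes', ['*'])
  def excludes = args.get('excludes', ['Test'])

  // read the ordered list of environments defined for the team
  def environments = getEnvironmentsFromConfigMap()

  for (envName in environments) {
    if (excludes.contains(envName)) continue
    if (!includes.contains('*') && !includes.contains(envName)) continue

    // optional human approval before promoting to this environment
    if (args.get('approvalRequired', false)) {
      input message: "Promote to ${envName}?"
    }
    promoteToEnvironment(envName)
  }
}
```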

automate GitHub release notes for non-npm projects too if using Conventional Commits?

So I'm loving the GitHub release notes we generate for npm projects:
https://github.com/fabric8-ui/fabric8-runtime-console/releases

e.g. if a project is using the Conventional Commits (http://conventionalcommits.org/) format for commit messages then we can generate nice release notes for the project.

I wonder if we could start to enable this on all Java & Go projects too if they opt in to using Conventional Commits? Maybe it could be a flag we enable in the Jenkinsfile or something?
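What makes this mechanical is that every Conventional Commits message starts with a type prefix like feat: or fix(scope):. A purely illustrative sketch of grouping commit subjects by type, as a release-note generator would:

```groovy
// Purely illustrative: group commit subjects by Conventional Commits type
def groupCommits(List<String> subjects) {
  def groups = [:].withDefault { [] }
  subjects.each { subject ->
    // matches e.g. "feat(ui): add login" or "fix!: handle null sha"
    def matcher = subject =~ /^(\w+)(\([^)]*\))?!?:\s*(.+)$/
    if (matcher.matches()) {
      groups[matcher.group(1)] << matcher.group(3)
    } else {
      groups['other'] << subject
    }
  }
  return groups
}

// e.g. groupCommits(['feat(ui): add login', 'fix: handle null sha'])
// would yield [feat: ['add login'], fix: ['handle null sha']]
```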

let's add the ability to use the current branch name to decide if we do a full release, CI job or developer build

we could use branch names and naming conventions/patterns to decide which branches are:

  • production release branches (where each release creates a new real release version number, artifacts and a docker image)
  • CI or PR jobs (where unit tests are run, and maybe a local snapshot image is built and tested locally, but nothing is actually pushed)
  • a developer editing branch, where a developer image is updated and used in a developer namespace (so each change is running quickly in a user's namespace) to give a kind of RAD editing environment

We could use variables to define the patterns used to differentiate between the kinds of builds. e.g. branches called master or starting with release could be the default production releases; branches starting with editing- could be developer editing branches; and anything else is assumed to be a CI / PR branch?

Then if folks fork a master branch, they get a new CI build for the changes they push
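The classification above could be sketched like this, with the patterns overridable via variables (the function name and defaults are just illustrations of the suggestion, not existing library code):

```groovy
// Invented helper: classify a branch name into a build kind.
// Pattern defaults follow the conventions suggested above and
// could be overridden by configuration.
def branchBuildKind(String branchName,
                    String releasePattern = /^(master|release.*)$/,
                    String editingPattern = /^editing-.*/) {
  if (branchName ==~ releasePattern) {
    return 'release'   // full release: version number, artifacts, docker image
  } else if (branchName ==~ editingPattern) {
    return 'editing'   // developer image deployed to the user's namespace
  }
  return 'ci'          // unit tests, local snapshot build, nothing pushed
}
```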

Utils.groovy findTagSha logic

The following assumption in the code is not true in any of the OpenShift deployments I have tried:

def findTagSha(OpenShiftClient client, String imageStreamName, String namespace) {

...

// latest tag is the first
TAG_EVENT_LIST:
for (def list : tags) {

The order of the tags in an ImageStream seems to be random, so picking the first tag found does not work reliably.

e.g.

status:
  dockerImageRepository: 172.30.209.124:5000/mta/simontest123
  tags:
  - items:
    - created: 2017-05-09T23:58:58Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:7b92ede95898259a8976fbd0013f81309c330b7a0a4d4b794f98bb08174e62a3
      generation: 1
      image: sha256:7b92ede95898259a8976fbd0013f81309c330b7a0a4d4b794f98bb08174e62a3
    tag: 6ea89bb
  - items:
    - created: 2017-05-10T01:18:03Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:59e235aeabc89a3038cc16275c8d3cd7d70a16cfee1f45a1484a890acaae51db
      generation: 1
      image: sha256:59e235aeabc89a3038cc16275c8d3cd7d70a16cfee1f45a1484a890acaae51db
    tag: 7d0ef5a
  - items:
    - created: 2017-05-10T01:02:03Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:4bbf3a31a5474d2455b3c005f55c1d94b23c41324089e4bca710b8f3e86cc037
      generation: 1
      image: sha256:4bbf3a31a5474d2455b3c005f55c1d94b23c41324089e4bca710b8f3e86cc037
    tag: e2b3b93
  - items:
    - created: 2017-05-10T00:57:41Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:2ef9f96201fe7b349ba0fb3afcb9f630d4662c4c59896803cb4e4bd7e732c1b9
      generation: 1
      image: sha256:2ef9f96201fe7b349ba0fb3afcb9f630d4662c4c59896803cb4e4bd7e732c1b9
    tag: e5ad8f0

Fabric8 always picks up the old image 7b92ede95898259a8976fbd0013f81309c330b7a0a4d4b794f98bb08174e62a3 and deploys it to staging and production, when it should have used the newer image 59e235aeabc89a3038cc16275c8d3cd7d70a16cfee1f45a1484a890acaae51db
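Since the tag order isn't guaranteed, a safer approach would be to compare the created timestamp of each tag's events and pick the most recent. A hedged sketch of that fix (the accessor names follow the usual fabric8 model conventions but should be verified against the client version in use):

```groovy
// Sketch: pick the most recently created tag rather than the first in the list.
// Accessor names (getItems(), getCreated(), getTag()) are assumed from the
// fabric8 openshift-client model classes and may need adjusting.
def findLatestTag(tags) {
  def newest = null
  def newestCreated = null
  for (tagEvents in tags) {
    for (item in tagEvents.getItems()) {
      def created = item.getCreated()   // e.g. "2017-05-10T01:18:03Z"
      // ISO-8601 UTC timestamps sort correctly as plain strings
      if (newestCreated == null || created > newestCreated) {
        newestCreated = created
        newest = tagEvents.getTag()
      }
    }
  }
  return newest
}
```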

apps fail if short git sha starts with a zero

I created a .NET app and the version used is the short git sha 0517806. When the application's deployment config YAML was applied, the version changed to 517806.0, which means the image stream isn't found.
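This looks like a YAML plain-scalar problem: an unquoted 0517806 can be resolved as the number 517806 and then re-serialised as the float 517806.0. If that is the cause, quoting the version wherever it lands in the generated resources should avoid it:

```yaml
# unquoted: may be parsed as a number and re-serialised as 517806.0
version: 0517806

# quoted: always stays the literal string "0517806"
version: "0517806"
```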

error starting build pod java.io.IOException: Pipe not connected

Seems to happen when resources are low or multiple jobs are running. This has been seen on OSO and GKE.

Executing shell script inside container [maven] of pod [kubernetes-137ebf2065f949d4acac4e019ed07af7-1e96524904d1e]
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline

GitHub has been notified of this commit’s build result

java.io.IOException: Pipe not connected
	at java.io.PipedOutputStream.write(PipedOutputStream.java:140)
	at java.io.OutputStream.write(OutputStream.java:75)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:125)
	at hudson.Launcher$ProcStarter.start(Launcher.java:384)
	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:157)
	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:63)
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:172)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:184)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:126)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
	at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
	at stageProject.call(/var/jenkins_home/jobs/fabric8-cd/jobs/fabric8-maven-plugin/branches/master/builds/16/libs/github.com/fabric8io/fabric8-pipeline-library/vars/stageProject.groovy:18)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
	at sun.reflect.GeneratedMethodAccessor239.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:74)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:165)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE

Exception: "Scripts not permitted to use new io.fabric8.openshift.client.DefaultOpenShiftClient"

Aloha,

currently the Fabric8-Jenkins-Build-Job runs into a weird exception when creating a new project using v2.2.192.

I suppose it may be related to the following line, but I'm not sure:

return new DefaultOpenShiftClient().isAdaptable(OpenShiftClient.class)

The original stack trace can be found here:
https://gist.github.com/anonymous/8b6b08236331677d24e42ff62edf571b

Thanks for any hint,
Qaiser

Use semver to work out maven project release versions

Currently our Maven Jenkinsfile library works out the release version using the Jenkins build number, which breaks if the Jenkins job is ever recreated.

We could extract the semver code that fabric8's Java projects already use to work out the next version:

https://github.com/fabric8io/fabric8-pipeline-library/blob/master/src/io/fabric8/Fabric8Commands.groovy#L180-L209

Then call this new function from the Jenkinsfiles here https://github.com/fabric8io/fabric8-jenkinsfile-library/blob/master/maven/CanaryReleaseStageAndApprovePromote/Jenkinsfile#L25

Bonus points for adding the first unit test for the library too ;)
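Extracted as a standalone function, the idea might look roughly like this (a sketch of computing the next version from the latest git tag, not the actual Fabric8Commands code linked above):

```groovy
// Sketch: compute the next semver release version from the latest git tag,
// rather than from the Jenkins build number
def nextPatchVersion(String latestTag) {
  // strip an optional leading "v", e.g. "v1.2.3" -> "1.2.3"
  def version = latestTag.replaceFirst(/^v/, '')
  def parts = version.tokenize('.')
  if (parts.size() != 3) {
    error "unexpected version format: ${latestTag}"
  }
  def patch = parts[2].toInteger() + 1
  return "${parts[0]}.${parts[1]}.${patch}"
}

// e.g. nextPatchVersion('v1.0.12') -> "1.0.13"
```

A unit test for exactly this function would be a natural first test for the library.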

create a github milestone and tag all closed issues (which have no milestone) with the current milestone

it'd be awesome to create a milestone every time we do a release where there are closed issues not yet associated with a milestone

Then we can easily see which issues got fixed in which release - all done mostly automatically. (Folks can always update the milestone on the issue after the release.)

So how about a function, githubCreateMilestone(String version) which would:

  • find all closed issues with no milestone
  • if there are any such issues, create a github milestone & associate those issues with it
  • we may want to avoid creating a milestone for releases with no fixed issues? I guess that could be a flag?
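A hedged sketch of the shape such a function could take. The githubApi helper here is invented (an authenticated wrapper around the GitHub REST v3 API that returns parsed JSON); the endpoint paths themselves are the documented GitHub v3 ones:

```groovy
// Invented helper githubApi(method, path, body = null) wrapping an
// authenticated GitHub REST v3 call; endpoint paths are the documented ones.
def githubCreateMilestone(String repo, String version, boolean skipIfEmpty = true) {
  // find all closed issues with no milestone
  def issues = githubApi('GET', "/repos/${repo}/issues?state=closed&milestone=none")
  if (issues.isEmpty() && skipIfEmpty) {
    return  // avoid creating a milestone for releases with no fixed issues
  }
  // create the milestone for this release...
  def milestone = githubApi('POST', "/repos/${repo}/milestones", [title: version])
  // ...and associate the closed issues with it
  issues.each { issue ->
    githubApi('PATCH', "/repos/${repo}/issues/${issue.number}",
              [milestone: milestone.number])
  }
}
```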

reduce logging of hubot

When I have a pipeline without any hubot configured I still see:

[Pipeline] hubotApprove
Hubot sending to room fabric8_default => Would you like to promote version 2.0.1 to the next environment?

    to Proceed reply:  fabric8 jenkins proceed job maxandersen/mirror/master build 1
    to Abort reply:    fabric8 jenkins abort job maxandersen/mirror/master build 1

No service hubot  is running!!!
No service found!

Two things come to mind:

  1. why does it even tell me about hubot when I don't have it running? I assume for many this would be the default, and thus 9 lines of output is wasteful. If it must say something, maybe only one line like Service hubot is not running!

  2. shouldn't enabling/disabling of features in the shared Jenkins pipeline be something controlled by the user? i.e. the user enabling/disabling extensions, rather than it being defined globally in github.com/fabric8io/fabric8-pipeline-library@master?

let's use the new API to query if a job name / branch name / gitUrl is a CI / CD / Developer pipeline

the current isCI() and isCD() functions should now delegate to a getPipeline() helper method which should lazily invoke this code: https://github.com/fabric8io/fabric8/blob/master/components/kubernetes-api/src/main/java/io/fabric8/kubernetes/api/pipelines/Pipelines.java#L34

and cache the Pipeline object (transiently!) for the lifetime of a job, lazily requerying if it's null.

something kinda like...

def isCI() {
  return getPipeline().isCI()
}

def isCD() {
  return getPipeline().isCD()
}

// cache the Pipeline object transiently (so it isn't serialized with the
// pipeline state) for the lifetime of the job
private transient Pipeline _pipeline = null

def getPipeline() {
  if (_pipeline == null) {
    def kubernetes = new DefaultKubernetesClient()
    def namespace = kubernetes.getNamespace()
    // TODO ensure that BRANCH_NAME and GIT_URL are populated!
    _pipeline = io.fabric8.kubernetes.api.pipelines.Pipelines.getPipeline(kubernetes, namespace, env)
  }
  return _pipeline
}

don't fail the pipeline when updating project dependencies

we noticed today that if we get an error in the pipeline when updating downstream projects, the entire build fails. Perhaps we don't want that; instead we should catch the error, log it and continue to the next project?

The error in this case was no permissions to create the updateVersion branch in the downstream project.
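A minimal sketch of the catch-log-continue behaviour when iterating over downstream projects (the project list and the updateDependencyVersion step name are illustrative, not real library API):

```groovy
// Illustrative: keep going if one downstream project update fails
def updateDownstreamProjects(List<String> projects) {
  def failures = []
  for (project in projects) {
    try {
      updateDependencyVersion(project)  // illustrative step name
    } catch (err) {
      echo "failed to update ${project}: ${err.message}"
      failures << project
    }
  }
  if (failures) {
    echo "WARNING: could not update: ${failures.join(', ')}"
    // the pipeline still succeeds; failures are only reported
  }
}
```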

Missing script approvals configuration in Jenkins CI

Using fabric8-pipeline-library@master (commit 3f84b0b), the script approvals configuration is missing in Jenkins CI.

I've added approvals manually on the "In-process Script Approval" page, but is there a way to configure the "Signatures already approved" list at fabric8 CI/CD creation time?

org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod jenkins.model.Jenkins getInstance
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectStaticMethod(StaticWhitelist.java:192)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onStaticCall(SandboxInterceptor.java:142)
 		at org.kohsuke.groovy.sandbox.impl.Checker$2.call(Checker.java:180)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedStaticCall(Checker.java:177)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:91)
 		at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall$0.callStatic(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
 		at io.fabric8.Fabric8Commands.getCloudConfig(Fabric8Commands.groovy:711)
 		at io.fabric8.Fabric8Commands$getCloudConfig$0.call(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
 		at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
 		at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
 		at mavenTemplate.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/19/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenTemplate.groovy:14)
 		at mavenNode.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/19/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenNode.groovy:8)
 		at WorkflowScript.run(WorkflowScript:30)
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method jenkins.model.Jenkins getCloud java.lang.String
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:178)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:119)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
 		at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall$0.callStatic(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
 		at io.fabric8.Fabric8Commands.getCloudConfig(Fabric8Commands.groovy:711)
 		at io.fabric8.Fabric8Commands$getCloudConfig$0.call(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
 		at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
 		at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
 		at mavenTemplate.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/20/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenTemplate.groovy:14)
 		at mavenNode.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/20/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenNode.groovy:8)
 		at WorkflowScript.run(WorkflowScript:30)

One last one:

approval: method io.fabric8.kubernetes.client.KubernetesClient services
callers:
at io.fabric8.Fabric8Commands.hasService(Fabric8Commands.groovy:637)
at io.fabric8.Fabric8Commands$hasService$1.call(Unknown Source)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at sonarQubeScanner.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/21/libs/github.com/fabric8io/fabric8-pipeline-library/vars/sonarQubeScanner.groovy:15)
at mavenCanaryRelease.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/21/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenCanaryRelease.groovy:62)

Jenkinsfiles should assert that the rollout to Staging/Production succeeds

Right now a Pipeline could fail to get a new version of a pod running in an environment (e.g. the pod never becomes ready, maybe due to quota issues or a missing environment-specific Service, Secret or ConfigMap or something).

Currently, once the apply is done, kubernetesApply() just assumes everything's great and carries on.

It would be nice to have a better flavour of this which does the Arquillian equivalent of this line:

             assertThat(kubernetesClient).deployments().pods().isPodReadyForPeriod();

Then the pipeline would wait for the pods to go green & be ready (readiness checks + liveness checks kick in); if things don't work, it'd fail the build.

Maybe extra bonus points would be to automatically rollback the Deployment change if the new version doesn't startup correctly?
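In a pipeline context, a lighter-weight version of the same assertion could just shell out to the CLI after the apply. A sketch (the deployment name and namespace are placeholders, and the rollback is the optional bonus behaviour):

```groovy
// Sketch: fail the build if the rollout doesn't finish; optionally roll back.
def assertRollout(String deploymentName, String namespace) {
  try {
    // blocks until the new pods are ready, or fails on timeout
    sh "oc rollout status deployment/${deploymentName} -n ${namespace}"
  } catch (err) {
    // bonus: roll back to the previous version before failing the build
    sh "oc rollout undo deployment/${deploymentName} -n ${namespace}"
    error "rollout of ${deploymentName} failed: ${err.message}"
  }
}
```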

let's generate release notes HTML reports in pipelines

there are a number of maven plugins and tools out there for generating release notes based on git commit history and fixed issues on github etc.

it'd be nice to include this OOTB in our release pipelines. So I guess we need to try some of these tools and see which ones work well, generate nice HTML output and integrate nicely with github issues etc.

Then if we can package it up in a docker image we can start to include it OOTB in our release pipelines (maybe making it optional via an environment variable or something) so folks can disable it if they wish?

Error in provisioning; slave=KubernetesSlave

Without any changes to our Jenkinsfile, our build started to fail. In Jenkins log we see:

Feb 27, 2017 10:49:12 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
SEVERE: Error in provisioning; slave=KubernetesSlave name: kubernetes-8ca638566b324447ad9fed48eccf8a81-33a3f3091084a, template=org.csanchez.jenkins.plugins.kubernetes.PodTemplate@6db9af81
java.lang.NullPointerException
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.combine(PodTemplateUtils.java:59)
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.lambda$combine$14(PodTemplateUtils.java:118)
	at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.combine(PodTemplateUtils.java:118)
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.unwrap(PodTemplateUtils.java:164)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.getPodTemplate(KubernetesCloud.java:375)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.access$000(KubernetesCloud.java:87)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:555)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:532)
	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

This started to happen when 088d36f was committed.

The Jenkinsfile is:

#!/usr/bin/groovy
@Library('github.com/fabric8io/fabric8-pipeline-library@master')

def localFailIfNoTests = ""
try {
  localFailIfNoTests = ITEST_FAIL_IF_NO_TEST
} catch (Throwable e) {
  localFailIfNoTests = "false"
}

def localItestPattern = ""
try {
  localItestPattern = ITEST_PATTERN
} catch (Throwable e) {
  localItestPattern = "*KT"
}


def versionPrefix = ""
try {
  versionPrefix = VERSION_PREFIX
} catch (Throwable e) {
  versionPrefix = "1.0"
}


def utils = new io.fabric8.Utils()
def canaryVersion = "${versionPrefix}.${env.BUILD_NUMBER}"
def label = "buildpod.${env.JOB_NAME}.${env.BUILD_NUMBER}".replace('-', '_').replace('/', '_')

mavenNode{
    checkout scm

  echo 'NOTE: running pipelines for the first time will take longer as build and base docker images are pulled onto the node'
  container(name: 'maven') {

    stage 'Build Release'
    mavenCanaryRelease {
      version = canaryVersion
    }

    stage 'Integration Test'
    mavenIntegrationTest {
      environment = 'Testing'
      failIfNoTests = localFailIfNoTests
      itestPattern = localItestPattern
    }
	
  }
}

Workaround

I found that by using @Library('github.com/fabric8io/fabric8-pipeline-library@versionUpdate3f1bf454-700d-4274-9e22-3b6bab4361bc') it does build.

It would be good if there is a stable branch and perhaps more user friendly branch names.

use different location for local maven repo based on OSO versus OSD

When using OSO we're going to restrict builds to 1 concurrent build per user, in which case it's safe to have a ReadWriteOnce PV for the local maven repo for doing CD releases or snapshot builds.

However when using OSD we probably want to use the job workspace as the local maven repository so that we can have parallel builds running and avoid overwriting each other or causing inconsistencies in the builds.

So we maybe need a configuration to know whether to use a ReadWriteOnce PV, a ReadWriteMany PV, or workspace-based persistence for builds. Some folks may want to disable persistence entirely too?

Maybe we need a ConfigMap we load to configure these kinds of things?

support a PR comment to skip automatic release

There are occasions, such as a readme update or a CI change (say, to a Jenkins plugin), where a developer doesn't want a full release automatically triggered.

CD pipelines should check the PR comments to see if we have a @fabric8cd skip release comment or something similar.
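A rough sketch of the check, assuming a hypothetical getPullRequestComments() helper that returns the comment bodies of the PR that triggered the build:

```groovy
// Hypothetical helper getPullRequestComments() returns a List<String>
// of comment bodies for the PR that triggered this build.
def shouldSkipRelease() {
  def comments = getPullRequestComments()
  return comments.any { it.contains('@fabric8cd skip release') }
}

// in the pipeline:
// if (shouldSkipRelease()) {
//   echo 'release skipped by PR comment'
//   return
// }
```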

long running approval jobs seem to get locked up?

I had a job waiting on the approval step that I left for 11 hours on DevTools OSO, and the Proceed & Abort links in the Jenkins console didn't seem to do anything any more. I eventually had to just kill the build.

I wonder if the build pods go unresponsive after a while?

Incorrect namespace created

Steps to reproduce

  1. Add def envStage = utils.environmentNamespace('my-project') to a Jenkinsfile
  2. Run in Jenkins

Expected

We would expect a new namespace called 'my-project' to be created.

Actual
'default-my-project' is created.

if planner (work item tracker) is running we should POST an update to the REST API when we have promoted a build

for background see this issue:
fabric8-services/fabric8-wit#726

essentially, if we can detect that planner / the work item tracker is running (e.g. via a kubernetes Service being present, or via configuration as per this issue: #74), and once we have the new REST API as per fabric8-services/fabric8-wit#726, then when a kubernetesApply() is done and the deployment has completed, we should POST the necessary JSON to the REST API so that the work item tracker can update the issue with a comment that something is ready for test etc.

No clear upgrade path

The following code from a Jenkinsfile used to work:

kubernetes.pod('buildpod').withImage('<ip address>:80/shiftwork/jhipster-build')
      .withPrivileged(true)
      .withHostPathMount('/var/run/docker.sock','/var/run/docker.sock')
      .withEnvVar('DOCKER_CONFIG','/home/jenkins/.docker/')
      .withSecret('jenkins-docker-cfg','/home/jenkins/.docker')
      .withSecret('jenkins-maven-settings','/root/.m2')
      .withServiceAccount('jenkins')
      .inside {

Now, however, it results in an error:

hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: static io.fabric8.kubernetes.pipeline.Kubernetes.withPrivileged() is applicable for argument types: (java.lang.Boolean) values: [true]
	at groovy.lang.MetaClassImpl.invokeStaticMissingMethod(MetaClassImpl.java:1503)
	at groovy.lang.MetaClassImpl.invokeStaticMethod(MetaClassImpl.java:1489)
	at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:897)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodN(ScriptBytecodeAdapter.java:168)
	at io.fabric8.kubernetes.pipeline.Kubernetes$Pod.methodMissing(Kubernetes.groovy)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
	at groovy.lang.MetaClassImpl.invokeMissingMethod(MetaClassImpl.java:941)
	at groovy.lang.MetaClassImpl.invokePropertyOrMissing(MetaClassImpl.java:1264)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1024)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:812)
	at io.fabric8.kubernetes.pipeline.Kubernetes$Pod.invokeMethod(Kubernetes.groovy)
	at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:103)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
	at WorkflowScript.run(WorkflowScript:33)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
	at sun.reflect.GeneratedMethodAccessor240.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:58)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:163)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:63)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Perhaps I missed a blog post, but I am unaware of any upgrade path from a previous version.

It would seem the withPrivileged() method has not been deprecated; it has simply been removed.

https://github.com/fabric8io/fabric8-jenkinsfile-library/search?utf8=%E2%9C%93&q=withPrivileged
https://github.com/fabric8io/fabric8-pipeline-library/search?utf8=%E2%9C%93&q=withPrivileged

It would be good to know how to upgrade.

Installed relevant Jenkins plugins

Below are the relevant plugins we have installed:
(screenshot: selection_657)
These two plugins appear to be child plugins of https://github.com/jenkinsci/kubernetes-pipeline-plugin.

Analysis

It seems there are several related projects. To get more clarity I've compiled the table below.

| Name | Active | Has withPrivileged() | Comments |
| --- | --- | --- | --- |
| jenkins-pipeline-library | No - Deprecated | Yes | |
| fabric8-pipeline-library | Yes | No | |
| kubernetes-plugin | Yes | Yes | Kubernetes Pipeline is a Jenkins plugin which extends Jenkins Pipeline to allow building and testing inside Kubernetes pods, reusing Kubernetes features like pods, build images, service accounts, volumes and secrets while providing an elastic slave pool (each build runs in new pods). |
| fabric8-jenkinsfile-library | Yes | No | |
| kubernetes-pipeline-plugin 1.4-SNAPSHOT | Yes | Yes | Uses the io.fabric8.kubernetes.pipeline package name, yet is not in the fabric8 GitHub project. Kubernetes Pipeline is a Jenkins plugin which extends Jenkins Pipeline to provide native support for using Kubernetes pods, secrets and volumes to perform builds. |

Light / modular template

The pod templates that we are currently using (e.g. the maven template) refer to resources managed by gofabric8 and are used to store things like settings, ssh keys, gnupg keys and more.

It would be nice if we had a flavor of the templates without all these fixed resources, or if those resources were optional.

Something like this would allow users to get started regardless of how they set up the environment or which Jenkins image they use. Of course, they wouldn't be able to enjoy Fabric8 to its full extent, but they could easily hack a pipeline that does a maven build, runs the integration/system tests and even updates internal environments, then gradually add more things to the mix. I think a step-by-step approach is really important, as it gives users time to digest and better understand how to use our stuff. It also gives us more flexibility.

The implementation is the tricky part....

What I'd like to avoid is an endless chain of if then else.
What I'd also like to avoid is having tons of different templates for the same thing.

What could possibly make sense here, is to leverage template nesting / composition.
So we could have something like a light maven template called withMaven and additional templates that attach the secrets or the rest of the resources (e.g. to define the ssh keys: withSsh). We could then bind them together:

withMaven(mavenImage: 'maven:3.3.9') {
    withSsh('jenkins-ssh') {
        withGpg('jenkins-gpg') {
            //do stuff
        }
    }
}

And if this starts getting verbose, we could add a withFabric8 wrapper that pulls in everything we need with a single declaration.
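As a sketch of that idea (the step names withMaven/withSsh/withGpg and the config keys are hypothetical, following the composition pattern above), withFabric8 could simply delegate to the lighter wrappers. This is a Jenkins shared-library fragment, not standalone code:

```groovy
#!/usr/bin/groovy
// Hypothetical sketch of a vars/withFabric8.groovy step that composes
// the lighter wrappers, so users opt in to everything with one call.
// withMaven/withSsh/withGpg and the config keys are assumed names.
def call(Map config = [:], Closure body) {
    withMaven(mavenImage: config.get('mavenImage', 'maven:3.3.9')) {
        withSsh(config.get('sshSecret', 'jenkins-ssh')) {
            withGpg(config.get('gpgSecret', 'jenkins-gpg')) {
                body()
            }
        }
    }
}
```

A Jenkinsfile could then just call withFabric8 { ... } and still drop down to the individual wrappers when it needs less.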

lets make it easier to configure some aspects of pipelines via a ConfigMap in Kubernetes/OpenShift

e.g. things like which branches are CD release branches versus CI branches/PRs versus developer branches (run tests + re-run apps fast) - see #3

We also should make it easy to enable/disable various features like:

  • generate maven site report
  • generate changelog report
  • run sonarqube
  • run Bayesian reports
  • run selenium tests

I'm not sure of the perfect approach: do we use the fabric8.yml file to enable/disable those features, or use a ConfigMap?

Either way we should come up with a standard function to wrap this up, so that pipelines can be configured with feature flags from a nice UI or CLI tool, without users having to hack Groovy source etc.
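For illustration, a ConfigMap-based approach might look something like this (all keys here are hypothetical, just to show the shape, not an existing fabric8 resource):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fabric8-pipelines
data:
  # which branches trigger full CD releases vs CI-only builds
  cd-branches: "master"
  ci-branches: "PR-.*"
  # feature flags read by the shared pipeline functions
  generate-site-report: "true"
  generate-changelog: "true"
  run-sonarqube: "false"
  run-selenium-tests: "false"
```

The shared functions would then look a flag up by key and fall back to a sensible default when the ConfigMap or key is absent, so existing pipelines keep working unchanged.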

PodTemplate should accept claimNames as parameters.

Bugs, quotas and provisioning cost (it often takes a while until a PVC is bound) sometimes make working with PVCs a PITA.

I should be able to pass different claim names as parameters, and if none is passed the podTemplate should be ephemeral.
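A minimal sketch of what that could look like, assuming the kubernetes-plugin volume DSL (persistentVolumeClaim / emptyDirVolume) and a hypothetical wrapper step:

```groovy
#!/usr/bin/groovy
// Sketch only: an ephemeral-by-default template. When no claimName is
// passed we mount an emptyDir instead of a PVC, so the build never
// waits on a claim being bound.
def call(Map parameters = [:], Closure body) {
    def claimName = parameters.get('claimName')
    def workspaceVolume = claimName ?
        persistentVolumeClaim(mountPath: '/home/jenkins', claimName: claimName) :
        emptyDirVolume(mountPath: '/home/jenkins')
    podTemplate(label: parameters.get('label', 'builder'),
                volumes: [workspaceVolume]) {
        body()
    }
}
```

Callers wanting the cached workspace would pass claimName: 'jenkins-workspace' (or whatever their PVC is called); everyone else gets a throwaway volume.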

java pipelines should support using next pom or git tag version rather than using jenkins build number

This is just a quick thought while it's on my mind.

To get the next pom version we could use something like this in mavenCanaryRelease.groovy.

What's missing is the PR to update to the next pom version number. This isn't a great approach; instead we could do what fabric8 does and base the next version on incrementing the latest git tag. That way no code changes are needed for the next version.

#!/usr/bin/groovy
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    def flow = new io.fabric8.Fabric8Commands()
    def s2iMode = flow.isOpenShiftS2I()
    echo "s2i mode: ${s2iMode}"
    def m = readMavenPom file: 'pom.xml'
    def version

    sh "git checkout -b ${env.JOB_NAME}-${env.BUILD_NUMBER}"

    if (config.version){
        version = config.version
        sh "mvn org.codehaus.mojo:versions-maven-plugin:2.2:set -U -DnewVersion=${version}"
    } else {
        sh 'mvn build-helper:parse-version versions:set -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} '
        m = readMavenPom file: 'pom.xml'
        version = m.version
    }

    sh "mvn clean -e -U deploy"

    if (flow.isSingleNode()){
        echo 'Running on a single node, skipping docker push as not needed'

        def groupId = m.groupId.split( '\\.' )
        def user = groupId[groupId.size()-1].trim()
        def artifactId = m.artifactId

        if (!s2iMode) {
            sh "docker tag ${user}/${artifactId}:${version} ${env.FABRIC8_DOCKER_REGISTRY_SERVICE_HOST}:${env.FABRIC8_DOCKER_REGISTRY_SERVICE_PORT}/${user}/${artifactId}:${version}"
        }
    } else {
      if (!s2iMode) {
        retry(3){
          sh "mvn fabric8:push -Ddocker.push.registry=${env.FABRIC8_DOCKER_REGISTRY_SERVICE_HOST}:${env.FABRIC8_DOCKER_REGISTRY_SERVICE_PORT}"
        }
      }
    }

    if (flow.hasService("content-repository")) {
      try {
        sh 'mvn site site:deploy'
      } catch (err) {
        // lets carry on as maven site isn't critical
        echo 'unable to generate maven site'
      }
    } else {
      echo 'no content-repository service so not deploying the maven site report'
    }
}
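A rough sketch of the git-tag approach as a plain shell function (the function name is just for illustration; in a real pipeline the tag would come from `git describe --abbrev=0 --tags`):

```shell
#!/bin/sh
# Sketch: derive the next release version by incrementing the patch
# part of the latest git tag, so no pom change or extra commit is
# needed to prepare the next version.
next_version() {
  tag=${1#v}                          # strip an optional leading "v"
  major=$(echo "$tag" | cut -d. -f1)
  minor=$(echo "$tag" | cut -d. -f2)
  patch=$(echo "$tag" | cut -d. -f3)
  echo "$major.$minor.$((patch + 1))"
}

next_version "v1.2.3"   # -> 1.2.4
```

The pipeline would then pass the result to `versions:set` exactly as the config.version branch above already does.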

approve step does not seem to resume after jenkins restart

I just got this after a pipeline had been in the approve state for a while. I'm guessing the Jenkins pod got killed:

Proceed or Abort
Resuming build at Tue Apr 18 18:50:27 UTC 2017 after Jenkins restart
Waiting to resume Unknown Pipeline node step: jenkins-slave-rcklt-5vn7r is offline
(previous line repeated 11 more times)
Waiting to resume Unknown Pipeline node step: Jenkins doesn’t have label jenkins-slave-rcklt-5vn7r
(previous line repeated 14 more times)

Approve step outside of a mavenNode{...} definition doesn't terminate build pod

Testing using a Jenkinsfile of:

#!/usr/bin/groovy
@Library('github.com/fabric8io/fabric8-pipeline-library@master')
def dummy
mavenNode{
  container('maven'){
    echo 'inside build pod'
  }
}
node{
    approve {
      room = null
      version = '1.0.0'
      console = null
      environment = 'Stage'
    }
}

The build pod is kept running until the job has finished, rather than being terminated at the closing brace of the mavenNode block. This means build pods stick around during the approve step, which is a waste of resources.

@iocanel suggested trying fabric8io/kubernetes-plugin@2b4f6d8, which works great. I wonder, however, whether we instead need to mark the build pod as complete, like the OpenShift S2I build pods?

Add a mechanism to share binaries between containers of the same pod.

In some cases we may need to add something extra to a container without having to rebuild its image. Since the pod templates already leverage multiple containers, it would be nice to have a tool that lets us ask one container to share something found on its path by copying it into the workspace, so that other containers can use it if needed.
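A minimal sketch of the idea in shell (the helper name and destination are assumptions; in a Jenkins pod the destination would live under the shared workspace volume that all containers mount):

```shell
#!/bin/sh
# Sketch: containers in a pod share the workspace volume, so one
# container can publish a tool from its PATH into the workspace and
# another container can pick it up from there.
share_tool() {
  tool=$1
  dest=${2:-./bin}        # in Jenkins this would be under $WORKSPACE
  mkdir -p "$dest"
  cp "$(command -v "$tool")" "$dest/"
}

# e.g. in the 'maven' container:   share_tool mvn "$WORKSPACE/bin"
# then in another container:       export PATH="$WORKSPACE/bin:$PATH"
```

This only works for statically linked or dependency-light binaries, of course; anything that needs shared libraries from the donor image would still require baking a new image.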

commit generated non java deployment / deployment config yamls to source code repo

Folks want to customise the generated deployment / deployment config yamls; in Java this is done with the help of the fabric8-maven-plugin. For non-Java pipelines we use the shared function https://github.com/fabric8io/fabric8-pipeline-library/blob/master/vars/getDeploymentResources.groovy, so we should raise a PR to merge the parameterised yaml into the source repo. Folks can then customise it in their repo, and when the pipeline runs it will still replace the version number, project name, labels etc.
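For illustration, the substitution the pipeline would still perform on the committed parameterised yaml could be as simple as this (the placeholder names are assumptions, not the actual getDeploymentResources contract):

```python
import string

# Sketch: substitute ${VERSION}/${NAME}-style placeholders in a
# parameterised deployment yaml committed to the source repo.
def render(template_text, **values):
    # safe_substitute leaves unknown placeholders intact rather than
    # failing, so user customisations survive untouched
    return string.Template(template_text).safe_substitute(values)

template = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${NAME}
  labels:
    version: "${VERSION}"
"""

print(render(template, NAME="myapp", VERSION="1.0.1"))
```

Using `${...}` placeholders keeps the committed file valid-ish yaml that users can diff and edit, while the pipeline fills in the release-specific values at run time.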

Adding support for Documentation

Add pipeline library steps that generate gh-pages docs based on the maven profiles -Pdoc-html and -Pdoc-pdf, typically replicating what is done by the fabric8 tools/ci-docs.sh utility.

We can then add a method to release.groovy like:

def documentation(project) {
  def m = readMavenPom file: 'pom.xml'
  generateWebsiteDocs {
    project = project[0]
    releaseVersion = project[1]
    artifactId = m.artifactId
  }
}

This will generate the documentation and push it to the gh-pages branch of the repo.

PodTemplates should be optionally named

The next release of kubernetes-plugin will allow the user to name the build pod, based on the name set on the PodTemplate (currently pods are named kubernetes-xxx-yyy-zzz, which is meaningless).

So it would be great if we were able to optionally pass a name.

Since we are mostly composing PodTemplates, I am not sure it makes sense to name them by type (e.g. maven, go, nodejs), though that could possibly be a default value.

Another approach would be to name templates in the same manner as we label them (by job name and build number). This would allow us to easily correlate a build pod with a specific Jenkins build; for example the pod would be named something like myproject-12-xxx-yyy-zzz.
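A small sketch of deriving such a name prefix from the Jenkins environment (the length limit and sanitisation rules here are assumptions based on Kubernetes' RFC 1123 label restrictions, since job names can contain characters pods cannot):

```python
import re

# Sketch: build a valid Kubernetes pod-name prefix from the Jenkins
# job name and build number. Pod names must be lowercase alphanumerics
# and '-', and short enough to leave room for the generated suffix.
def pod_name(job_name, build_number, max_len=40):
    base = "%s-%s" % (job_name, build_number)
    base = re.sub(r"[^a-z0-9-]", "-", base.lower()).strip("-")
    return base[:max_len]

print(pod_name("MyProject", 12))  # -> myproject-12
```

The plugin would then append its random suffix to this prefix, giving names like myproject-12-xxx-yyy-zzz that are trivial to trace back to a build.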
