istio / test-infra
License: Apache License 2.0
Currently automated qualification uses master as its head branch, so any new commit pushed to master will trigger another round of tests. We should use a dedicated branch as head so that it does not change, and delete that branch when the PR is closed or merged.
We need to publish code coverage to cloud storage in order to publish it to velodrome.
Currently each bazel build downloads its workspace dependencies. The same is true for go deps.
Note that I tried using an NFS mount point on each slave, but bazel just hangs writing a lock.
My next attempt will be a daily job that persists each module's bazel cache and mounts it using aufs on each slave. This would require changing the slave entrypoint and probably using NFS in conjunction with an aufs mount.
I propose that we customize the link posted with different Prow job statuses.
I have the following use case in mind:
...job starts...
plank posts to Github with the prow/deck logs link
...the job completes...
plank posts to Github with the gubernator job link.
What is the testing policy for new features?
Error logs from hook:
// ...
{"error":"error unmarshaling /etc/config/config: error converting YAML to JSON: yaml: line 202: did not find expected key","level":"error","msg":"Error loading config.","path":"/etc/config/config","time":"2017-07-24T16:26:00Z"}
// ...
This will help us understand where we need to focus our efforts:
https://github.com/kubernetes/test-infra/tree/master/velodrome/
PRs might leave behind stale resources in the cluster due to accidents or developer error.
We should try to reset the cluster every few hours / daily.
Currently, we have several metrics tracking issue and pull-request-related info on velodrome.istio.io. In order to monitor the development process and better support external contributors, extra metrics are required, such as build duration, recent build failure rate, and code coverage changes. Release support is also something we are interested in.
We need to define the testcases that will cover all features to be delivered in alpha and implement them.
Currently Manager e2e tests run directly on the Jenkins cluster. Another cluster should be created and plugged into the tests.
This process is currently manual, and because of that it never gets done, which impacts our release schedule.
In order to reduce code duplication in Jenkinsfiles, we need to create libraries according to https://jenkins.io/doc/book/pipeline/shared-libraries/
Note: posting after the fact to have a record of the issue.
We need to specify prowjob_namespace in config.yaml.
Currently seeing:
{"error":"error listing prow jobs: response has status \"404 Not Found\" and body \"{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"the server could not find the requested resource\",\"reason\":\"NotFound\",\"details\"{},\"code\":404}\n\"","level":"error","msg":"Error syncing periodic jobs.","time":"2017-07-25T22:43:39Z"}
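For reference, a minimal sketch of the missing key in config.yaml (the namespace values here are assumptions; use whatever namespaces prow actually runs in):

```yaml
# Namespace where ProwJob resources live; without this, plank/deck
# list against the wrong API path and get a 404 like the one above.
prowjob_namespace: default
# Namespace where test pods are scheduled.
pod_namespace: test-pods
```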
The list of comparisons should include:
and point out the pain points of our existing Jenkins-based infra.
Jenkins library resources are very limited. It would be good to have a submodule with all checks (linter, codecov, docker, etc.) checked in, so that we can update all modules at once.
Changes would be implemented in the submodule, and then a PR to each module would be necessary to enable them, which would also include any changes needed to make the checks pass.
That way we could also call just one method that runs all the checks, and users would still be able to run the checks from the repo they are working on (which is impossible with Jenkins pipeline library resources).
Use Github API for creating annotated tags
Use Github API to upload the tar(s) created by create_release_archives.sh
Here is some information about the release:
https://github.com/istio/istio/blob/master/release/README.md
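A minimal sketch of the annotated-tag part using the GitHub v3 git-data API (POST /git/tags, then POST /git/refs). The repo path and token handling are assumptions; uploading the create_release_archives.sh tars would go through the separate releases/assets API, not shown here.

```python
import json
import urllib.request

# Assumed target repo; adjust per module.
API = "https://api.github.com/repos/istio/istio"

def tag_payload(tag: str, message: str, commit_sha: str) -> dict:
    """Request body for POST /git/tags (creates the annotated tag object)."""
    return {"tag": tag, "message": message, "object": commit_sha, "type": "commit"}

def _post(path: str, body: dict, token: str) -> dict:
    req = urllib.request.Request(
        API + path,
        data=json.dumps(body).encode(),
        headers={"Authorization": "token " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # raises on non-2xx
        return json.load(resp)

def create_annotated_tag(tag: str, message: str, commit_sha: str, token: str):
    tag_obj = _post("/git/tags", tag_payload(tag, message, commit_sha), token)
    # The ref must point at the tag *object*, not the commit; pointing it
    # at the commit would produce a lightweight tag instead.
    _post("/git/refs", {"ref": "refs/tags/" + tag, "sha": tag_obj["sha"]}, token)
```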
This will speed up build time.
Because the credentials archive in cloud storage needs to match the commit on the branch, we are forced to push to the branch directly, which might be difficult when multiple contributors are working on istio-testing. We should move the backup to another repo, and move the backup and create-pd scripts there as well.
Across the istio repos, all of the build status badges are broken. This appears to be because the link to Jenkins now requires a login.
Currently the pkg_check tool reads the per-package code coverage requirement from a local file. If a package does not have an entry in the file, the requirement is effectively 0 (i.e., no tests are needed at all).
It would be better if we could set a code coverage requirement for the whole repo and use the file only to lower the threshold for certain packages (e.g., a package containing only a main method).
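A sketch of the proposed inversion, assuming a repo-wide default with the file acting purely as a list of per-package exceptions (the 90% default and dict shapes are assumptions, not pkg_check's actual format):

```python
DEFAULT_THRESHOLD = 90.0  # assumed repo-wide requirement

def required_coverage(package: str, overrides: dict) -> float:
    """Per-package floor: the overrides file only lowers the bar."""
    return overrides.get(package, DEFAULT_THRESHOLD)

def failing_packages(coverage: dict, overrides: dict) -> list:
    """Packages whose measured coverage is below their requirement."""
    return sorted(
        pkg for pkg, cov in coverage.items()
        if cov < required_coverage(pkg, overrides)
    )
```

With this shape, a missing entry means "hold the repo default" rather than "no tests needed".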
Codecov.io is mostly working with Jenkins on istio.io/manager. We still need to generate coverage reports upon successful merge into the master branch so codecov.io can properly generate the coverage % diff against master (see the warning message below). I think we just need to re-run the coverage report on a post-submit/commit trigger.
No coverage uploaded for pull request base (master@f3fdddb). Click here to learn what that means.
Istio is going to be running in its own namespace, and the same is true for istio-ca. It would be helpful to have a managed pool of clusters for testing so that each test does not need to wait for cluster creation.
Let's use this for qualification at first, not for smoke testing.
Once a PR has moved to master, there is no point in checking code coverage and CLA again. However, we cannot disable them. We need to add a flag that lists all the checks we can ignore:
curl https://api.github.com/repos/istio/mixer/status/bb04cef3c86676529737166b7db7dd40e2670739
"statuses": [
{
"url": "https://api.github.com/repos/istio/mixer/statuses/bb04cef3c86676529737166b7db7dd40e2670739",
"id": 1129469319,
"state": "failure",
"description": "The Travis CI build failed",
"target_url": "https://travis-ci.org/istio/mixer/builds/218654453",
"context": "continuous-integration/travis-ci/push",
"created_at": "2017-04-04T22:18:07Z",
"updated_at": "2017-04-04T22:18:07Z"
},
{
"url": "https://api.github.com/repos/istio/mixer/statuses/bb04cef3c86676529737166b7db7dd40e2670739",
"id": 1129471994,
"state": "success",
"description": "Jenkins job istio/mixer-pr-stable passed",
"target_url": "https://testing.istio.io/job/istio/job/mixer-pr-stable/19/",
"context": "istio/mixer-pr-stable",
"created_at": "2017-04-04T22:19:12Z",
"updated_at": "2017-04-04T22:19:12Z"
},
{
"url": "https://api.github.com/repos/istio/mixer/statuses/bb04cef3c86676529737166b7db7dd40e2670739",
"id": 1129472448,
"state": "success",
"description": "All necessary CLAs are signed",
"target_url": null,
"context": "cla/google",
"created_at": "2017-04-04T22:19:22Z",
"updated_at": "2017-04-04T22:19:22Z"
}
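A sketch of how such a flag could be consumed, assuming it is a list of ignorable status contexts; the helper filters a combined-status payload like the one above (the context names in IGNORABLE are illustrative assumptions):

```python
import json

# Assumed value of the proposed ignore flag.
IGNORABLE = {"codecov/project", "cla/google"}

def blocking_failures(status_json: str, ignorable=IGNORABLE) -> list:
    """Contexts that are not successful and not on the ignore list."""
    statuses = json.loads(status_json)["statuses"]
    return [s["context"] for s in statuses
            if s["state"] != "success" and s["context"] not in ignorable]
```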
We should delete all namespaces that are not accounted for.
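A minimal sketch of the cleanup selection, assuming an allowlist of namespaces that are accounted for (the names in KNOWN are assumptions; the actual deletion via kubectl is left out):

```python
# Namespaces that are accounted for (assumed allowlist).
KNOWN = {"default", "kube-system", "kube-public", "istio-system"}

def namespaces_to_delete(all_namespaces, known=KNOWN):
    """Everything not on the allowlist is a stale test namespace."""
    return sorted(ns for ns in all_namespaces if ns not in known)
```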
Currently we have a PD that we use for Jenkins. We create a backup of the config file each time we make changes or update plugins. It would be nice to be able to generate a new PD from the backup in case something bad happens to the PD.
Because we post a message when a presubmit passes using a golang program, we need to run maniFlow{} on a slave. It would be good to have it run on a master executor to free up slaves.
to make layered builds
PR incoming
/assign ldemailly
Background:
In a previous system (proprietary, not OSS), we had a list of packages with their baseline coverage and perf numbers. Checkins were not allowed to enter the repo if these numbers regressed. Developers could change the baselines at will in the case a regression was inevitable. So the point was not to prevent forward progress, it was more to prevent accidental regressions. We also had a process that would regularly increase any coverage or perf numbers based on observed current levels. So the numbers would naturally tend to get better over time and never backslide.
So imagine just having a file at the top of the repo that lists the various packages and their coverage numbers. As part of the check-in process, we run a little script that compares the output of the coverage test run with what's in the file, and then fails the commit if there's a regression.
In every repo we could have a codecov.info file which defines the threshold, and we could keep the script in the toolbox repo so that all modules can use it.
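The two pieces of the scheme above (fail on regression, periodically ratchet the baseline up) can be sketched as follows; the dict-of-floats representation of the baseline file is an assumption:

```python
def check_regressions(current: dict, baseline: dict) -> list:
    """Packages whose measured coverage dropped below the recorded floor."""
    return sorted(pkg for pkg, floor in baseline.items()
                  if current.get(pkg, 0.0) < floor)

def ratchet(current: dict, baseline: dict) -> dict:
    """Periodic job: raise each floor to the observed level, never lower it.
    Developers can still lower floors manually when a regression is inevitable."""
    return {pkg: max(floor, current.get(pkg, floor))
            for pkg, floor in baseline.items()}
```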
Our current release note generation is manual and therefore nonexistent. We can get assistance from @david-mcmahon and team on how to set this up.
As part of istio release 0.2, we'll be publishing debian packages. Those packages will be available for people to download and install manually, but we want to create a repository available to the public.
For the short term we can use Google internal tools to do that, but this means that only Googlers will be able to update the repo with release packages.
An alternative would be to create and maintain our own repo until we can push to the official debian repos. If we create our own repo, we need to be able to publish debian artifacts to a central location and have the repo make them available.
Based on istio/istio#186
We want to create another cluster for E2E testing and set up different runs.
Mungegithub is going to be used for:
The artifacts link is defined here:
https://github.com/istio/test-infra/blob/master/src/org/istio/testutils/GitUtilities.groovy#L152
It should be updated to list all docker images created and published.
Everything is documented in the README. We might want to create options for deploying kubernetes, the persistent disk, and hazelcast.
The script attempted to delete the master branch. We should only delete branches that we created.
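A sketch of a guard for the cleanup script, assuming our automation names its branches with a known prefix (both the prefix and the protected set are assumptions):

```python
CREATED_PREFIX = "qualification-"      # assumed prefix our automation uses
PROTECTED = {"master", "stable"}       # never delete these, regardless

def deletable(branch: str) -> bool:
    """Only branches we created ourselves, never protected ones."""
    return branch.startswith(CREATED_PREFIX) and branch not in PROTECTED
```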
prow: https://github.com/kubernetes/test-infra/tree/master/prow
Currently we are using Jenkins, because some of the workflows are not supported by prow:
Let's work with spxtr@ to see if this is something we might do.
Bazel uses the SHA of the workspace to define where to store cache. Since we have multiple projects for the same module, we have a different cache for each. It would be good to share the cache to speed up builds.
We can override where Jenkins checks out the code, as we do for goBuildNode.
Note that we may also want to speed up builds by separating workspaces when we build using different configs like asan, tsan, msan, release, etc.
We need a tool that will:
Look at commits on master and create a PR on the stable branch. This will trigger other tests. For each module we'll mark some tests as required.
The tool will look for PRs on stable, and if the required checks are passing, it will create a tag on the commit, fast-forward stable to that commit, and close the PR.
In case of failed checks, the PR will be closed.
When starting prowbazel:0.2.1 without privilege, the container dies with:
In theory we don't need docker to work when running without privilege, so we can probably discard this issue, but I am not sure there is a way to check whether we are running in privileged mode or not.
As discussed, let's rename this repo to test-infra and provide a valid description.
A couple of prow issues I noticed:
That being said, it's already great that it builds with /ok-to-build without the submitter being in the org!
Before any maintenance, we need to: