kubernetes-sigs / prow

Prow is a Kubernetes based CI/CD system developed to serve the Kubernetes community. This repository contains Prow source code and Hugo sources for Prow documentation site.

Home Page: https://docs.prow.k8s.io

License: Apache License 2.0

Languages: Go 94.63%, TypeScript 2.48%, Shell 1.62%, HTML 0.66%, CSS 0.39%, Python 0.09%, Makefile 0.06%, Mermaid 0.04%, SCSS 0.01%, JavaScript 0.01%
Topic: k8s-sig-testing

prow's Introduction

Prow


The source code and statically generated docs for Prow live here. Historically Prow was developed in kubernetes/test-infra along with other things, but the source code was moved here on April 9, 2024.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

prow's People

Contributors

0xmichalis, adshmh, airbornepony, alexnpavel, alvaroaleman, amwat, bentheelder, cblecker, chaodaig, cjwagner, droslean, fejta, hongkailiu, ibzib, ixdy, k8s-ci-robot, katharine, krzyzacy, matthyx, mirandachrist, mpherman2, nikhita, petr-muller, qhuynh96, shyamjvs, smg247, spiffxp, spxtr, stevekuznetsov, yguo0905


prow's Issues

Make all existing Prow OWNERS members of the @kubernetes-sigs org.

The Prow source code and its corresponding OWNERS files are being moved to this repo. The kubernetes/test-infra repo is currently still the source of truth, but we have begun mirroring the source here to prepare.
Code ownership should not change with this source code move, but some of the existing OWNERS file members are not yet members of the @kubernetes-sigs org. We should resolve this, ideally before completing the move, to ensure existing OWNERS have the access they need for this repo.

Example: #67 (comment)

The following users are mentioned in OWNERS file(s) but are untrusted for the following reasons. One way to make the user trusted is to add them as members of the kubernetes-sigs org. You can then trigger verification by writing /verify-owners in a comment.

  • matthyx
    • User is not a member of the org. Satisfy at least one of these conditions to make the user trusted.
  • droslean
    • User is not a member of the org. Satisfy at least one of these conditions to make the user trusted.
    • prow/OWNERS
  • smg247
    • User is not a member of the org. Satisfy at least one of these conditions to make the user trusted.
    • prow/spyglass/OWNERS
  • jmguzik
    • User is not a member of the org. Satisfy at least one of these conditions to make the user trusted.
  • AlexNPavel
    • User is not a member of the org. Satisfy at least one of these conditions to make the user trusted.
    • prow/bugzilla/OWNERS
    • prow/jira/OWNERS
    • prow/plugins/bugzilla/OWNERS

/cc @matthyx @droslean @smg247 @jmguzik @AlexNPavel
/assign @timwangmusic

Prow Issue for TFplan

Dear Team,

I am facing an issue with the terraform plan check test, which is a mandatory check in one of our private GitHub repositories.
When I run terraform plan and terraform apply locally on my PC they work, but once the code is checked into GitHub, the pull-tfplan test fails.
Could someone help me resolve this issue? I am not able to understand where Prow is picking up this configuration from.

Error: Unsupported Terraform Core version
This configuration does not support Terraform version 0.12.29. To proceed,
either choose another supported Terraform version or update the root module's
version constraint. Version constraints are normally set for good reason, so
updating the constraint may lead to other errors or unexpected behavior.
Error: Initialization required. Please see the error message above.
Error: No configuration files
Plan requires configuration to be present. Planning without a configuration
would mark everything for destruction, which is normally not what is desired.
If you would like to destroy everything, run plan with the -destroy option.
Otherwise, create a Terraform configuration file (.tf file) and try again.

This is what I have in our .prow.yaml

- name: pull-tfplan
  decorate: true
  decoration_config:
    ssh_key_secrets:
    - prow-github-ssh-key
  always_run: true
  skip_report: false
  clone_uri: "git@github.com:VerveWireless/verve-group-infrastructure.git"
  spec:
    containers:
    - image: pubnative/ci-runner:1.0
      command: ["/bin/sh", "-c"]
      args: [
        "
        set -x;
        git-crypt unlock /etc/git-crypt/verve-group-infrastructure.gpg &&
        /bin/bash ./bin/runtf.sh
        "
      ]

Provide a way to run the website using a container

If we provide a way to run the website using a container, then users

  • don't have to install the dependencies (npm, Go, Hugo extended version, ...)
  • don't need to experience dependency-version-related hassles
  • can be assured that the same dependency versions for Netlify builds are used for their local site builds
  • don't need to upgrade/downgrade their existing dependencies, if any
  • are provided with a consistent build env, regardless of their actual dev env
  • ...

Ref:

https://github.com/kubernetes/website#running-the-website-using-a-container

https://github.com/kubernetes/website/blob/c1c075845dad11368829d46189605ca10a52dee4/Makefile#L81-L82


It would also be great to update check-broken-links.sh to use the htmltest container instead of the htmltest binary, which users currently need to install manually. 😊
(Related comment: #5 (comment))

PR in merge pool but not merged

pr ref: Project-HAMi/HAMi#298

tide config:

tide:
  merge_method:
    Project-HAMi/HAMi: squash
  queries:
  - repos:
    - Project-HAMi/HAMi
    labels:
    - lgtm
    - approved
    missingLabels:
    - needs-rebase
    - do-not-merge/hold
    - do-not-merge/work-in-progress
    - do-not-merge/invalid-owners-file
  context_options:
    # Use branch-protection options from this file to define required and optional contexts.
    # This is convenient if you are using branchprotector to configure branch protection rules,
    # as tide will use the same rules as will be added by the branch protector.
    from-branch-protection: true
    # Specify how to handle contexts that are detected on a PR but not explicitly listed in
    # required-contexts, optional-contexts, or required-if-present-contexts. If true, they are
    # treated as optional and do not block a merge. If false or not present, they are treated
    # as required and will block a merge.
    skip-unknown-contexts: true
    orgs:
      org:
        required-contexts:
        - "check-required-for-all-repos"
        repos:
          repo:
            required-contexts:
            - "check-required-for-all-branches"
            branches:
              branch:
                from-branch-protection: false
                required-contexts:
                - "required_test"
                optional-contexts:
                - "optional_test"
                required-if-present-contexts:
                - "conditional_test"

and the logs of tide:

{"client":"git","component":"tide","file":"sigs.k8s.io/prow/pkg/git/v2/interactor.go:210","func":"sigs.k8s.io/prow/pkg/git/v2.(*interactor).Checkout","level":"info","msg":"Checking out \"c14da94889343c4a5191ce4d20b8a1661dbb2113\"","org":"Project-HAMi","repo":"HAMi","severity":"info","time":"2024-05-22T01:53:01Z"}
{"client":"git","component":"tide","file":"sigs.k8s.io/prow/pkg/git/v2/interactor.go:301","func":"sigs.k8s.io/prow/pkg/git/v2.(*interactor).MergeWithStrategy","level":"info","msg":"Merging \"cbd8b03574da6c49372ebafffc222be9564b9365\" using the \"squash\" strategy","org":"Project-HAMi","repo":"HAMi","severity":"info","time":"2024-05-22T01:53:01Z"}
{"base-sha":"c14da94889343c4a5191ce4d20b8a1661dbb2113","branch":"master","component":"tide","controller":"sync","file":"sigs.k8s.io/prow/pkg/tide/tide.go:1649","func":"sigs.k8s.io/prow/pkg/tide.(*syncController).syncSubpool","level":"info","msg":"Syncing subpool","num_prowjobs":0,"num_prs":1,"org":"Project-HAMi","repo":"HAMi","severity":"info","time":"2024-05-22T01:53:01Z"}
{"base-sha":"c14da94889343c4a5191ce4d20b8a1661dbb2113","batch-passing":null,"batch-pending":null,"branch":"master","component":"tide","controller":"sync","file":"sigs.k8s.io/prow/pkg/tide/tide.go:1658","func":"sigs.k8s.io/prow/pkg/tide.(*syncController).syncSubpool","level":"info","msg":"Subpool accumulated.","org":"Project-HAMi","prs-missing":null,"prs-passing":[298],"prs-pending":null,"repo":"HAMi","severity":"info","time":"2024-05-22T01:53:01Z"}
{"client":"github","component":"tide","controller":"sync","file":"sigs.k8s.io/prow/pkg/github/client.go:799","func":"sigs.k8s.io/prow/pkg/github.(*client).log","level":"info","msg":"Merge(Project-HAMi, HAMi, 298, {  cbd8b03574da6c49379564b9365 squash})","severity":"info","time":"2024-05-22T01:53:01Z"}
{"action":"MERGE","base-sha":"c14da94884d20b8a1661dbb2113","branch":"master","component":"tide","controller":"sync","file":"sigs.k8s.io/prow/pkg/tide/tide.go:1687","func":"sigs.k8s.io/prow/pkg/tide.(*syncController).syncSubpool","level":"info","msg":"Subpool synced.","org":"Project-HAMi","repo":"HAMi","severity":"info","targets":[298],"time":"2024-05-22T01:53:01Z"}
{"component":"tide","controller":"sync","duration":"1.528521116s","file":"sigs.k8s.io/prow/pkg/tide/tide.go:473","func":"sigs.k8s.io/prow/pkg/tide.(*syncController).Sync.func1","level":"info","msg":"Synced","severity":"info","time":"2024-05-22T01:53:02Z"}
{"SHA":"c14da94885191ce4d20b8a1661dbb2113","client":"git","component":"tide","file":"sigs.k8s.io/prow/pkg/git/v2/interactor.go:267","func":"sigs.k8s.io/prow/pkg/git/v2.(*interactor).ObjectExists","level":"info","msg":"Checking if Git object exists","org":"Project-HAMi","repo":"HAMi","severity":"info","time":"2024-05-22T01:53:03Z"}
{"SHA":"4151994b83be51edd9fe4303db2e901","client":"git","component":"tide","file":"sigs.k8s.io/prow/pkg/git/v2/interactor.go:267","func":"sigs.k8s.io/prow/pkg/git/v2.(*interactor).ObjectExists","level":"info","msg":"Checking if Git object exists","org":"Project-HAMi","repo":"HAMi","severity":"info","time":"2024-05-22T01:53:03Z"}

Stop advertising GCR images, use registry.k8s.io or similar

As discussed previously in kubernetes/test-infra#31728 (comment)

A) GCR is deprecated
B) Serving from a single-region registry like this is not cost-efficient. Originally it was intended only for internal consumption by the project's own CI deployment.

If we want to publish images for external users, they should really be at https://registry.k8s.io/ (note: images must go through image promotion; you cannot push directly). For internal usage they could be moved to Artifact Registry.

Prow will at minimum have to go through kubernetes/k8s.io#1343, but it is a special case: the other affected projects (at least as far as we know) already serve users through registry.k8s.io, so for them the migration is less pressing ...

Really, we should be promoting images, along with tagging releases, now that we have a repo dedicated to prow.

/kind bug
/priority important-soon

Restrict Prow for Users running in Environments with OPA constraints

Our team would like to investigate adding restrictions to Prow to comply with common OPA constraints that organizations may have. We would like to prevent Prow from running in privileged mode while keeping its main functionality. This would give users the option of running Prow in a more business-friendly manner.

Add Resource Validation to checkconfig Tool in Strict Mode

Feature Request: Add Resource Validation to checkconfig Tool in Strict Mode

Description

I wanted to check if we can add validation to the checkconfig tool to ensure all job configurations include resource requests and limits for CPU and memory. This validation will happen only in strict mode.

Example Code:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    utilerrors "k8s.io/apimachinery/pkg/util/errors"
    v1 "sigs.k8s.io/prow/pkg/apis/prowjobs/v1"
    "sigs.k8s.io/prow/pkg/config"
)

func validateResourceRequirements(c config.JobConfig) error {
    var errs []error

    for repo, jobs := range c.PresubmitsStatic {
        for _, job := range jobs {
            errs = append(errs, validateJobResources(repo, job.JobBase))
        }
    }

    for repo, jobs := range c.PostsubmitsStatic {
        for _, job := range jobs {
            errs = append(errs, validateJobResources(repo, job.JobBase))
        }
    }

    for _, job := range c.Periodics {
        errs = append(errs, validateJobResources("periodic", job.JobBase))
    }

    return utilerrors.NewAggregate(errs)
}

func validateJobResources(repo string, job config.JobBase) error {
    var errs []error

    if job.Agent == string(v1.KubernetesAgent) {
        if job.Spec != nil && len(job.Spec.Containers) > 0 {
            for _, container := range job.Spec.Containers {
                if container.Resources.Requests == nil || container.Resources.Limits == nil {
                    errs = append(errs, fmt.Errorf("job '%s' in repo '%s' is missing resource requests or limits", job.Name, repo))
                } else {
                    if _, ok := container.Resources.Requests[corev1.ResourceCPU]; !ok {
                        errs = append(errs, fmt.Errorf("job '%s' in repo '%s' is missing CPU resource requests", job.Name, repo))
                    }
                    if _, ok := container.Resources.Requests[corev1.ResourceMemory]; !ok {
                        errs = append(errs, fmt.Errorf("job '%s' in repo '%s' is missing memory resource requests", job.Name, repo))
                    }
                    if _, ok := container.Resources.Limits[corev1.ResourceCPU]; !ok {
                        errs = append(errs, fmt.Errorf("job '%s' in repo '%s' is missing CPU resource limits", job.Name, repo))
                    }
                    if _, ok := container.Resources.Limits[corev1.ResourceMemory]; !ok {
                        errs = append(errs, fmt.Errorf("job '%s' in repo '%s' is missing memory resource limits", job.Name, repo))
                    }
                }
            }
        }
    }

    return utilerrors.NewAggregate(errs)
}

func validate(o options) error {
    // existing validation code

    // Add resource validation check in strict mode
    if o.strict {
        if err := validateResourceRequirements(cfg.JobConfig); err != nil {
            errs = append(errs, err)
        }
    }

    // existing validation code

    return utilerrors.NewAggregate(errs)
}

make docs discoverable

I can't find a link to the rendered docs in this repo or in test-infra/prow

whenever these are ready for consumption, we should add links

GitHub allows configuring a link on the repo itself, which would make sense here. I'd also suggest that we customize the README in this repo with more details, and place a prominent link to this repo and the rendered docs in test-infra/prow/README.md.

Suggestion: Skip Prow CI tests if PR contains changes only in `site/` directory

Currently, this repository contains both 1) Prow source code and 2) Hugo sources for Prow documentation site.
And both build tests and Netlify preview builds are being executed for each PR.
But,

  • if a PR contains changes only in the site/ directory, then Prow CI tests seem unnecessary for that PR, and
  • if a PR contains changes only in the cmd/, config/, hack/, pkg/, and/or test/ directories, then Netlify preview builds seem unnecessary for that PR.

So my suggestion is:

  • if a PR contains changes only in the site/ directory, skip Prow CI tests for the PR, and
  • if a PR contains changes only in the cmd/, config/, hack/, pkg/, and/or test/ directories, skip Netlify preview builds for the PR.

This change would avoid unnecessary Prow CI test and Netlify preview build runs.

[Feature]: Addition of a response for support related issues

I have seen several issues raising support-related requests on GitHub. Wouldn't it be cool if a command were added to k8s-ci-robot/prow that pings the author of an issue with a message whenever the kind/support label is applied to it?

Input command:
/kind support

Output generated by bot:
@Author_of_issue, GitHub is not the right place for support or troubleshooting related requests. If you're looking for help, you could ask on [Server Fault](https://serverfault.com/questions/tagged/kubernetes). You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
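If this were implemented as a hook plugin, a rough sketch could look like the following. The package name, the githubClient interface, and the handler signature are all invented for illustration; they do not mirror Prow's actual plugin API.

package supportresponse

import "fmt"

const supportMsg = "GitHub is not the right place for support or troubleshooting " +
    "related requests. If you're looking for help, you could ask on " +
    "[Server Fault](https://serverfault.com/questions/tagged/kubernetes), the " +
    "[Kubernetes Slack](http://slack.k8s.io/), or the " +
    "[Discuss Kubernetes](https://discuss.kubernetes.io/) forum."

// githubClient is the minimal GitHub surface this sketch needs.
type githubClient interface {
    CreateComment(org, repo string, number int, comment string) error
}

// handleLabelAdded posts the canned response when kind/support is applied.
func handleLabelAdded(gc githubClient, org, repo string, number int, label, author string) error {
    if label != "kind/support" {
        return nil
    }
    return gc.CreateComment(org, repo, number, fmt.Sprintf("@%s: %s", author, supportMsg))
}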

There is also a helpful screenshot on the Kubernetes site that can be referred to.


Wrong prow status during Jenkins parallel pipeline test

We are having an issue where the Prow Jenkins operator does not support Jenkins parallel pipelines. Since migrating from ghprb, we have noticed instances where tests triggered from Prow were reported as passed on GitHub while some of the parallel stages had failed or been aborted on Jenkins.

The issue was reproduced in the following test PR, which ran a parallel Jenkins job: Test PR. Here we can see the "metal3-centos-e2e-feature-test-main — Jenkins job succeeded." status for the parallel run even though only one test passed and the two other tests were aborted/failed. Jenkins Reproduction 1, Jenkins Reproduction 2

Also adding the link to the parallel test pipeline code, which worked properly before the Prow migration from ghprb.

We expect a correct Prow response in case of any failure in the parallel test pipeline. In the above test cases the Prow status should have been Failure instead of Success.

Description of this repo needs update

The description of this repo needs an update, since this repo now contains the Prow source code as well.

[As-is]

Statically generated docs for Prow (the tool that currently resides in kubernetes/test-infra). For historical reasons Prow lives in kubernetes/test-infra along with other things, but this repo is all about Prow.


[Feature]: [assign plugin] Support GitHub Organization Teams as targets for assign/cc commands

Would it be possible to support GitHub Organization Teams as targets for the assign/cc commands?

Currently, I am working on the contextual logging feature, which follows a specific review process for PRs:

/cc [@]kubernetes/wg-structured-logging-reviews

https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md#what-to-include-in-the-pull-request

However, prow does not support /cc <teams>, resulting in an error.

In most cases, the reviewers selected by the blunderbuss plugin based on the OWNERS files are sufficient. However, for PRs like the one below, the OWNERS of the hack directory may not necessarily include members of [@]kubernetes/wg-structured-logging-reviews.
kubernetes/kubernetes#124722

If /cc <teams> were supported, I think it would simplify the review request process for cases like the above.
(This is because there would be no need to send review requests to specific individuals within the team.)

Implementation proposal:
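As a purely hypothetical sketch of the parsing half (none of these names mirror the real assign plugin): team mentions such as @kubernetes/wg-structured-logging-reviews would be split out from user mentions, and the resulting team slugs passed along via GitHub's review-request endpoint, which accepts a team_reviewers list alongside reviewers.

import (
    "regexp"
    "strings"
)

// teamMentionRe matches org/team mentions such as
// "@kubernetes/wg-structured-logging-reviews".
var teamMentionRe = regexp.MustCompile(`^@([\w-]+)/([\w.-]+)$`)

// splitCCTargets separates individual users from org teams in the
// arguments of a /cc command.
func splitCCTargets(args string) (users, teams []string) {
    for _, tok := range strings.Fields(args) {
        if m := teamMentionRe.FindStringSubmatch(tok); m != nil {
            teams = append(teams, m[2]) // team slug, to be sent as a team reviewer
            continue
        }
        users = append(users, strings.TrimPrefix(tok, "@"))
    }
    return users, teams
}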

I would appreciate your thoughts on this issue.

/kind feature

Forking Prow

We are moving the /prow directory from the k8s.io/test-infra repo to this repo.

The migration should satisfy the following requirements:

  • Developers use a dedicated Prow repository on GitHub.
  • The Prow repository must not depend on libraries in the test-infra repo.
  • Prow commit history is retained in the new repository.
  • Presubmit and postsubmit Prow jobs such as image publishing will run on the new repository.

Migrate Prow-related labels from `k/t-i` repo

It would be great to migrate Prow-related labels from k/t-i repo.

[As-is]
According to https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.yaml, these labels exist in the kubernetes/test-infra repo:

  1. area/prow
  2. area/prow/branchprotector
  3. area/prow/bump
  4. area/prow/clonerefs
  5. area/prow/crier
  6. area/prow/config-bootstrapper
  7. area/prow/deck
  8. area/prow/entrypoint
  9. area/prow/gcsupload
  10. area/prow/gerrit
  11. area/prow/hook
  12. area/prow/horologium
  13. area/prow/initupload
  14. area/prow/jenkins-operator
  15. area/prow/knative-build
  16. area/prow/mkpj
  17. area/prow/mkpod
  18. area/prow/peribolos
  19. area/prow/phony
  20. area/prow/plank (Note: plank has been deprecated.)
  21. area/prow/plugins
  22. area/prow/pubsub
  23. area/prow/sidecar
  24. area/prow/sinker
  25. area/prow/splice
  26. area/prow/status-reconciler
  27. area/prow/spyglass
  28. area/prow/tide
  29. area/prow/tot
  30. area/prow/pod-utilities

[Option 1]
Migrate Prow-related labels as-is:

  1. area/prow
  2. area/prow/branchprotector
  3. area/prow/bump
  4. area/prow/clonerefs
  5. area/prow/crier
  6. area/prow/config-bootstrapper
  7. area/prow/deck
  8. area/prow/entrypoint
  9. area/prow/gcsupload
  10. area/prow/gerrit
  11. area/prow/hook
  12. area/prow/horologium
  13. area/prow/initupload
  14. area/prow/jenkins-operator
  15. area/prow/knative-build
  16. area/prow/mkpj
  17. area/prow/mkpod
  18. area/prow/peribolos
  19. area/prow/phony
  20. area/prow/plank (Note: plank has been deprecated.)
  21. area/prow/plugins
  22. area/prow/pubsub
  23. area/prow/sidecar
  24. area/prow/sinker
  25. area/prow/splice
  26. area/prow/status-reconciler
  27. area/prow/spyglass
  28. area/prow/tide
  29. area/prow/tot
  30. area/prow/pod-utilities

[Option 2]
Since this repo is all about Prow, remove the prow prefix from all labels, and drop the area/prow label itself:

  1. area/prow
  2. area/branchprotector
  3. area/bump
  4. area/clonerefs
  5. area/crier
  6. area/config-bootstrapper
  7. area/deck
  8. area/entrypoint
  9. area/gcsupload
  10. area/gerrit
  11. area/hook
  12. area/horologium
  13. area/initupload
  14. area/jenkins-operator
  15. area/knative-build
  16. area/mkpj
  17. area/mkpod
  18. area/peribolos
  19. area/phony
  20. area/plank (Note: plank has been deprecated.)
  21. area/plugins
  22. area/pubsub
  23. area/sidecar
  24. area/sinker
  25. area/splice
  26. area/status-reconciler
  27. area/spyglass
  28. area/tide
  29. area/tot
  30. area/pod-utilities

Which option seems better? Any opinions are welcome!

Note: I don't know whether label_sync is enabled for this repo (k-s/prow). If not, we need to enable it as well.

Note: We need to add an OWNERS file to each directory to enable Prow's auto-labeling.
(PR #101 affects the location of the OWNERS files that will be added.)

/assign

Document using Tekton Pipelines as prow jobs

I understand from kubernetes/test-infra#11888 that Prow is able to run Tekton pipelines as Prow jobs.
However, it does not look like this is documented on docs.prow.k8s.io (a "Tekton" search returns no results).

From the PR conversation, it looks like a follow-up PR was planned but never materialized (I guess that's the habitual fate of planned documentation 😄).

It looks like Tekton has some docs on this (https://github.com/tektoncd/plumbing/blob/main/docs/prow.md#tekton-pipelines-with-prow), but as far as I understand, they apply to their Prow instance, not to Prow in general.

kubernetes/test-infra#13874 also seems relevant, in particular the third comment, but I'm not sure how much has changed since then.

@Gregory-Pereira you might be interested.

Migrate existing Prow docs over to this repo

Context: kubernetes/test-infra#24821 (comment) and this design doc

The existing docs site at https://docs.prow.k8s.io is rather bare-bones and needs more content. Luckily, a lot of content already exists, in the k/t-i repo (look for Markdown files). We need to identify content that needs to be migrated over, create a migration task list for each of those content areas, and then one-by-one do the actual migration.

Note that any content that is migrated should probably be deleted from k/t-i, so this would require edits to both k/t-i and this repo.

[blunderbuss plugin]: Modify to prevent additional reviewers from being assigned

While it is a rare case: when a PR is created, then changed to draft status, and then marked ready for review again, blunderbuss assigns additional reviewers even though it already assigned reviewers when the PR was created.
Examples of this behavior can be observed in the following PR:

Therefore, it seems necessary to modify the behavior so that additional automatic assignments are not made when the required reviewers have already been assigned.
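A minimal sketch of that guard, assuming a hypothetical slice of already-requested reviewer logins (the real blunderbuss plugin's types and wiring differ):

// needsReviewers reports whether blunderbuss should pick reviewers at all.
// Returning false when reviewers already exist prevents the draft ->
// ready-for-review transition from adding a second batch.
func needsReviewers(requestedReviewers []string) bool {
    return len(requestedReviewers) == 0
}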

/assign
/kind bug

branchprotector does not honor CTRL-C

In short: branchprotector does not honor CTRL-C and, if killed, may leave threads running in the background.

I did a dry-run as suggested by the docs and wanted to cancel it as it took so long to run.

Issue:

  1. It doesn't honor CTRL-C; pressing it does nothing.
  2. If suspended with CTRL-Z and then killed, it leaves worker threads behind, which keep running and spamming until they are killed separately.

How to reproduce:

go run ./prow/cmd/branchprotector \
  --config-path=/path/to/config.yaml \
  --github-token-path=/path/to/my-github-token

Our prow config for reference.
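For reference, a minimal sketch of the standard Go pattern such a fix could follow, wiring SIGINT/SIGTERM into a cancellable context that workers observe; this is illustrative only, not branchprotector's actual code:

package main

import (
    "context"
    "fmt"
    "os/signal"
    "sync"
    "syscall"
    "time"
)

func main() {
    // ctx is cancelled on SIGINT/SIGTERM, so workers can observe it and exit.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
    defer stop()

    var wg sync.WaitGroup
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for {
                select {
                case <-ctx.Done():
                    fmt.Printf("worker %d: shutting down\n", id)
                    return
                case <-time.After(time.Second):
                    // one unit of (simulated) branch-protection work
                }
            }
        }(i)
    }
    wg.Wait()
}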

Provide additional response when external-plugin (cherrypick) fails

When using the cherrypick plugin, any failures need to be surfaced on the PR with an appropriate response. In this case, when the diff for a PR is too large, no response is provided and the logs have to be checked manually:

{"component":"cherrypicker","error":"failed to get patch: status code 406 not one of [200], body: {\"message\":\"Sorry, the diff exceeded the maximum number of files (300). Consider using 'List pull requests files' API or locally cloning the repository instead.\",\"errors\":[{\"resource\":\"PullRequest\",\"field\":\"diff\",\"code\":\"too_large\"}],\"documentation_url\":\"https://docs.github.com/rest/pulls/pulls#list-pull-requests-files\"}","event-GUID":"3567fe10-178d-11ef-8d06-fcff71c12be5","event-type":"issue_comment","file":"sigs.k8s.io/prow/cmd/external-plugins/cherrypicker/server.go:151"
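A rough sketch of the suggested behavior (the commenter interface and function below are invented for illustration; the real cherrypicker plumbing differs): after a failed attempt, the error would be posted back on the originating PR instead of only being logged.

// commenter is the minimal GitHub surface this sketch needs.
type commenter interface {
    CreateComment(org, repo string, number int, comment string) error
}

// reportFailure surfaces err on the originating PR so authors do not have
// to read component logs to learn why the cherry-pick never appeared.
func reportFailure(gc commenter, org, repo string, number int, author string, err error) error {
    return gc.CreateComment(org, repo, number,
        fmt.Sprintf("@%s: cherry-pick failed: %v", author, err))
}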

gcr.io/k8s-prow: some of the images don't seem to get pushed after move to dedicated repo

tl;dr

it seems that some prow images aren't getting pushed to gcr.io/k8s-prow any more;
e.g. we noticed that the commenter image is missing for tag v20240418-4c9d8ca12. The same goes for tag v20240410-4be743f3e.

long story

We (the https://github.com/kubevirt GitHub org) run our own Prow instance, https://prow.ci.kubevirt.io/. We have automation in place that updates the Prow image tags in our infrastructure repository https://github.com/kubevirt/project-infra for KubeVirt CI at periodic intervals.

Since the move of Prow into its dedicated repository (kubernetes/test-infra#31728), some of the images don't get pushed any more.

Below is a grep of the prow images we are currently using, combined with a listing of the tags available from the image registry.

$ for image in $(git grep -oE 'gcr.io/k8s-prow/[a-z0-9_]+' -- ./github | \
    sed 's#.*:#docker://#' | sort -u); \
  do \
      echo "$image has $(skopeo list-tags $image | grep '"v20240418-4c9d8ca12' | wc -l) images with tag v20240418-4c9d8ca12"; \
  done
docker://gcr.io/k8s-prow/branchprotector has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/checkconfig has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/cherrypicker has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/clonerefs has 4 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/commenter has 0 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/configurator has 0 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/crier has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/deck has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/entrypoint has 4 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/exporter has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/ghproxy has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/grandmatriarch has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/hook has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/horologium has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/initupload has 4 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/label_sync has 0 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/needs has 0 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/peribolos has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/pipeline has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/prow has 0 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/sidecar has 4 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/sinker has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/status has 0 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/tide has 1 images with tag v20240418-4c9d8ca12
docker://gcr.io/k8s-prow/tot has 1 images with tag v20240418-4c9d8ca12

Note: this list might not cover all the prow images that are pushed, though.

/cc @timwangmusic @cjwagner

Affected PRs that we had to revert:

Implement doc checkers

Reference: kubernetes/test-infra#24821 (comment)

As per the design doc, we should implement automated checkers to make sure we uphold a minimum quality standard for both existing documentation and newly proposed documentation.

There have been some discussions in the design doc already, but feel free to comment on here about which checker(s) to implement first.

Mass job status change API

Original issue: kubernetes/test-infra#30108

What would you like to be added:
A simple API to mass update statuses of jobs based on filtering criteria:

  • cluster
  • state
  • started_after
  • started_before
  • job type

There would be a single endpoint that would accept the above criteria, and find and update all matching ProwJobs to the provided state.

Why is this needed:
OpenShift recently experienced an outage in one of the build clusters in our Prow instance. Due to this, we had a large number of jobs stuck in the triggered state. We could have made good use of something that allowed us to abort all of these jobs. I think a general purpose API to update the status of jobs would be useful for other purposes as well, but the most salient use case would be mass-abort by criteria.
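A speculative sketch of what the request payload and matching logic could look like; the StatusUpdateRequest type and its JSON field names are invented here, while the ProwJob fields it reads are real:

import (
    "time"

    prowv1 "sigs.k8s.io/prow/pkg/apis/prowjobs/v1"
)

// StatusUpdateRequest captures the filtering criteria from this proposal.
type StatusUpdateRequest struct {
    Cluster       string              `json:"cluster,omitempty"`
    State         prowv1.ProwJobState `json:"state,omitempty"`
    StartedAfter  *time.Time          `json:"started_after,omitempty"`
    StartedBefore *time.Time          `json:"started_before,omitempty"`
    JobType       prowv1.ProwJobType  `json:"job_type,omitempty"`
    // TargetState is what every matching ProwJob would be set to, e.g. aborted.
    TargetState prowv1.ProwJobState `json:"target_state"`
}

// matches reports whether a ProwJob satisfies every provided criterion.
func (r StatusUpdateRequest) matches(pj prowv1.ProwJob) bool {
    if r.Cluster != "" && pj.Spec.Cluster != r.Cluster {
        return false
    }
    if r.State != "" && pj.Status.State != r.State {
        return false
    }
    if r.JobType != "" && pj.Spec.Type != r.JobType {
        return false
    }
    if r.StartedAfter != nil && !pj.Status.StartTime.Time.After(*r.StartedAfter) {
        return false
    }
    if r.StartedBefore != nil && !pj.Status.StartTime.Time.Before(*r.StartedBefore) {
        return false
    }
    return true
}

The single endpoint would decode this struct, list ProwJobs, and patch the state of each match to TargetState.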

Question: default job + in_repo_config override

Hi.

I hope this is a good place to ask a question about Prow functionality (and documentation)? (Please point me to more appropriate channels if not!)


Can I define a ProwJob in the prow config, and override it in a managed repo using in_repo_config and a .prow.yaml file? The use case is to have baseline jobs and let particular repos override them because of special needs.

For example, in the prow config:

in_repo_config:
  enabled:
    my_org: true
presubmits:
  my_org:
  - name: pull-validate
    always_run: true
    decorate: true
    spec: {} # whatever

in some repos under my_org:

presubmits:
- name: pull-validate
  always_run: true
  decorate: true
  spec: {} # more specific

The current docs do not mention this; maybe it's documented in another place? My online search also didn't find anything about ProwJob name conflicts, which could be relevant, but I might have missed it.

Thanks !

Spyglass Buildlogs Limit Filesize

Spyglass can load larger files, but it can take quite a while for large build logs to load. Can we add a check so that, if the build log is larger than a certain size (maybe 5 MB), we provide a link to the raw build log while the large file continues to load? Or maybe we can pre-load the Raw build-log.txt link?

`tide` not honoring multiple reviewer branch protection

Migrated from kubernetes/test-infra#23031, reported by @dhaiducek:

What happened:

  • With this organization-level tide configuration:
tide:
  queries:
  - labels:
    - approved
    - lgtm
    missingLabels:
    - 'dco-signoff: no'
    - do-not-merge/hold
    - do-not-merge/invalid-owners-file
    - do-not-merge/work-in-progress
    - needs-rebase
    orgs:
    - <org>
  • Set reviewers in branch-protection for a particular repo:
branch-protection:
  orgs:
    <org>:
      repos:
        <repo>:
          protect: true
          required_pull_request_reviews:
            dismiss_stale_reviews: true
            required_approving_review_count: 2
  • tide merges PRs without the second review (or any review, really; it'll merge with /lgtm without an actual review)

What you expected to happen:

  • With branch protection rules in place, I'd expect tide to respect the required reviewer count and not merge PRs until it is met

How to reproduce it (as minimally and precisely as possible): (configuration above)
Please provide links to example occurrences, if any:
Anything else we need to know?:

  • I found a configuration option for tide queries and was wondering what it does, and whether it is the missing piece for honoring reviewers or whether multiple required reviewers are simply not currently a feature of tide:
tide:
  queries:
    reviewApprovedRequired: true

exclude closed PRs from needs-triage labeling

Closed PRs can be in a state that is neither needs-triage nor triage/accepted.

Right now we have triage team(s) applying triage/accepted to clear the triage queue, but what we really need is a way to signal that something was not "accepted" but should not be triaged anymore.

Unfortunately, removing triage/accepted just ... adds back needs-triage.

@kubernetes/sig-contributor-experience

Support approving GitHub workflows

Currently, the ok-to-test label only works for Prow jobs. This issue proposes making ok-to-test also work for GitHub workflows; that is, GitHub Actions runs would always be approved when the PR has the ok-to-test label.

I remember that there was a corresponding discussion in kubernetes/test-infra, but I can't find it now.

I think it can be a prow plugin.

/kind feature

Errors occur when trying to run the website locally

I tried to run the website locally, but some errors occurred.

[env]

[prereq]

  • make init-theme
  • make update-theme

[cmd & output]

โฏ hugo server -v                                     
Start building sites โ€ฆ 
hugo v0.91.2+extended linux/amd64 BuildDate=unknown
INFO 2022/11/11 16:15:24 syncing static files to /
ERROR 2022/11/11 16:15:24 render of "page" failed: execute of template failed: template: docs/single.html:30:7: executing "docs/single.html" at <partial "scripts.html" .>: error calling partial: "/home/jhseo/prow/site/themes/docsy/layouts/partials/scripts.html:67:107": execute of template failed: template: partials/scripts.html:67:107: executing "partials/scripts.html" at <resources.Concat>: error calling Concat: slice []interface {} not supported in concat
ERROR 2022/11/11 16:15:24 render of "page" failed: execute of template failed: template: docs/single.html:30:7: executing "docs/single.html" at <partial "scripts.html" .>: error calling partial: "/home/jhseo/prow/site/themes/docsy/layouts/partials/scripts.html:67:107": execute of template failed: template: partials/scripts.html:67:107: executing "partials/scripts.html" at <resources.Concat>: error calling Concat: slice []interface {} not supported in concat
ERROR 2022/11/11 16:15:24 render of "page" failed: execute of template failed: template: docs/single.html:30:7: executing "docs/single.html" at <partial "scripts.html" .>: error calling partial: "/home/jhseo/prow/site/themes/docsy/layouts/partials/scripts.html:67:107": execute of template failed: template: partials/scripts.html:67:107: executing "partials/scripts.html" at <resources.Concat>: error calling Concat: slice []interface {} not supported in concat
ERROR 2022/11/11 16:15:24 render of "page" failed: execute of template failed: template: docs/single.html:30:7: executing "docs/single.html" at <partial "scripts.html" .>: error calling partial: "/home/jhseo/prow/site/themes/docsy/layouts/partials/scripts.html:67:107": execute of template failed: template: partials/scripts.html:67:107: executing "partials/scripts.html" at <resources.Concat>: error calling Concat: slice []interface {} not supported in concat
ERROR 2022/11/11 16:15:24 failed to render pages: render of "page" failed: execute of template failed: template: docs/single.html:30:7: executing "docs/single.html" at <partial "scripts.html" .>: error calling partial: "/home/jhseo/prow/site/themes/docsy/layouts/partials/scripts.html:67:107": execute of template failed: template: partials/scripts.html:67:107: executing "partials/scripts.html" at <resources.Concat>: error calling Concat: slice []interface {} not supported in concat
Error: Error building site: TOCSS: failed to transform "scss/main.scss" (text/x-scss): SCSS processing failed: file "stdin", line 6, col 1: File to import not found or unreadable: ../vendor/bootstrap/scss/bootstrap. 
Built in 169 ms

Tide blocks PRs which have checks with "skipped" status

Since a few days ago (I think since the weekend), tide is no longer merging PRs that have skipped actions.

It reports: Pending — Not mergeable. Job Approve ok-to-test has not succeeded.

For full context, see: https://kubernetes.slack.com/archives/C09QZ4DQB/p1713803378307299

It looks like GitHub changed the "check-runs" API to additionally return a "skipped" status, which is not handled in prow/pkg/tide/tide.go, lines 2133 to 2139 at a25fe4d:

if checkRun.Conclusion == checkRunConclusionNeutral || checkRun.Conclusion == githubql.String(githubql.StatusStateSuccess) {
    context.State = githubql.StatusStateSuccess
    return context
}
context.State = githubql.StatusStateFailure
return context
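One possible direction for a fix (a sketch only; whether upstream resolved it exactly this way is not confirmed here) is to treat the new "skipped" conclusion like "neutral", assuming a constant for GitHub's SKIPPED conclusion value:

const checkRunConclusionSkipped = "SKIPPED" // assumed value of GitHub's "skipped" conclusion

if checkRun.Conclusion == checkRunConclusionNeutral ||
    checkRun.Conclusion == checkRunConclusionSkipped ||
    checkRun.Conclusion == githubql.String(githubql.StatusStateSuccess) {
    context.State = githubql.StatusStateSuccess
    return context
}
context.State = githubql.StatusStateFailure
return context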

Some references:


This will affect a few more repositories, as this action alone was copy-pasted across several repos: https://cs.k8s.io/?q=%27ok-to-test%27&i=nope&files=&excludeFiles=&repos=

FontAwesome icons are not displayed

[env]

  • Windows 10 64-bit + Google Chrome 107.0.5304.107 64-bit // Microsoft Edge 107.0.1418.56 64-bit
  • Ubuntu 18.04 desktop 64-bit + Firefox 107.0 64-bit // Google Chrome 96.0.4664.45 64-bit



On the other hand, the icons are displayed correctly in the Netlify preview and in local preview.



My guesses are:

Umbrella issue to revise/add contents

Ref: #4 (comment)

The final step is to update/reorganize/rearrange the contents from the Legacy Snapshot. However, we can also just add new content on the side and later mark things in the Legacy Snapshot as "dated". The one benefit of the Legacy Snapshot is that at least we get a table of contents of every single Prow markdown file, which was previously only discoverable by grepping through the sources (or through enough determined mouse clicks). So in this sense it's something that can happen over a longer period of time. Most likely things will happen organically here.

Anyone who can edit this issue body can update the task list below.

[Common TODO: check, and revise if needed]
