kubernetes / k8s.io

Code and configuration to manage Kubernetes project infrastructure, including various *.k8s.io sites

Home Page: https://git.k8s.io/community/sig-k8s-infra

License: Apache License 2.0

Languages: HCL 46.52% · Shell 33.02% · Go 12.28% · Python 4.05% · Makefile 1.60% · Dockerfile 1.27% · VCL 0.96% · Open Policy Agent 0.22% · PLpgSQL 0.07% · HTML 0.01%

Topics: k8s-sig-infra, kubernetes

k8s.io's Introduction

k8s.io

Kubernetes project infrastructure, managed by the kubernetes community via sig-k8s-infra

  • apps: community-managed apps that run on the community-managed aaa cluster
  • artifacts: non-image artifacts published to artifacts.k8s.io
  • audit: scripts to export all relevant gcp resources, and the most recently-reviewed export
  • dns: DNS for kubernetes.io and k8s.io
  • groups: google groups on the kubernetes.io domain
  • hack: scripts used for development, testing, etc.
  • images: container images published to gcr.io/k8s-staging-infra-tools
  • infra/gcp: scripts and data to manage our GCP infrastructure
    • bash/namespaces: scripts and data to manage K8s namespaces and RBAC for aaa
    • bash/prow: scripts and data used to manage projects used for e2e testing and managed by boskos
    • bash/roles: scripts and data to manage custom GCP IAM roles
    • terraform/modules: terraform modules intended for re-use within this repo
    • terraform/projects: terraform to manage (parts of) GCP projects
  • k8s.gcr.io: container images published by the project, promoted from gcr.io/k8s-staging-* repos
  • policy: open policy agent policies used by conftest to validate resources in this repo
  • registry.k8s.io: work-in-progress to support cross-cloud mirroring/hosting of containers and binaries

We provide a publicly viewable billing report accessible to members of [email protected]

Please see https://git.k8s.io/community/sig-k8s-infra for more information

k8s.io's People

Contributors

aledbf, ameukam, aramase, bentheelder, bobymcbobs, cblecker, cncf-ci, cpanato, dims, hakman, ixdy, jimdaga, johngmyers, justaugustus, justinsb, k8s-ci-robot, k8s-infra-ci-robot, listx, mboersma, palnabarun, pkprzekwas, puerco, saschagrunert, sbueringer, spiffxp, strongjz, thockin, upodroid, vincepri, xmudrii


k8s.io's Issues

Provide firebase accounts for subproject docs

In the cluster-api meeting today we discussed how best to do docs for subprojects, and there was a general preference for using Firebase, except that it isn't free. Is this something we can do under the CNCF wg-infra umbrella?

cc @neolit123 and @davidewatson for visibility and possibly to provide some background context.

move k8s.io redirector behind Ingress and automate TLS using cert-manager

The TLS cert for the k8s.io redirector expires in about a week (April 11). Let's get it automated!

I've already got cert-manager set up in the cluster, so now it's just a matter of getting the redirector running behind an Ingress without breaking anything.

Steps to get there:

  1. Teach nginx how to handle forwarded HTTPS requests from the GCP Ingress (since it won't terminate HTTPS directly in this case).
  2. Create an Ingress to the existing LoadBalancer service, using a new static IP as the existing static IP is still bound to the LoadBalancer. Use the existing soon-to-expire TLS cert to terminate HTTPS on the Ingress.
  3. Update DNS records to point to the new static IP and wait for propagation.
  4. Switch the service to NodePort, releasing the old static IP.
  5. Create cert-manager Certificate resources - use the Let's Encrypt staging endpoint for canary, and the prod endpoint for prod; a sketch follows this list. (Note: because of the large number of subdomains, this creates a lot of backend services at once (~50). We'll need to increase our backend service quota first.)
  6. After the TLS cert has been created (may take some time), update the Ingress to use the new secret. (I suggest using a new name here, since we'll be using different Let's Encrypt endpoints for canary and prod.)
  7. Switch off HTTPS support in nginx and delete the old certificate.
  8. Hopefully never think about this again. :)
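
Step 5's Certificate resources could look something like this minimal sketch; the names (k8s-io-canary, letsencrypt-staging) are hypothetical, and the API version assumes a current cert-manager release (older releases used certmanager.k8s.io/v1alpha1):

# sketch: canary cert issued against the Let's Encrypt staging endpoint
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: k8s-io-canary
  namespace: k8s-io-canary
spec:
  secretName: k8s-io-canary-tls
  issuerRef:
    name: letsencrypt-staging   # prod would reference a letsencrypt-prod issuer
    kind: ClusterIssuer
  dnsNames:
  - k8s.io
  - docs.k8s.io
  - get.k8s.io
  # ...one entry per redirector subdomain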

/assign
cc @thockin @munnerz @nikhita

Update TLS cert for most of the aliases

A wildcard cert would work but I'm assuming that it is hard to get one of those. Right now the cert for k8s.io has a set of SANs:

k8s.io
changelog.k8s.io
code.k8s.io
docs.k8s.io
examples.k8s.io
get.k8s.io
issue.k8s.io
issues.k8s.io
jenkins.k8s.io
pr.k8s.io
prs.k8s.io
rel.k8s.io
releases.k8s.io
slack.k8s.io
submit-queue.k8s.io
www.k8s.io

This is missing:

yum.k8s.io
apt.k8s.io
dl.k8s.io
features.k8s.io
feature.k8s.io
reviewable.k8s.io
testgrid.k8s.io

Create IAM Dump scripts

As an external public consumer of kubernetes
In order to have confidence in the k8s-infrastructure
I want to be able to see a history of changes in the configuration and permissions of the CNCF GCP organization and kubernetes-public project

Given a requested change to the configuration
When the change is completed
Then the change will show up as a commit to the k8s.io repo

I've put some quick notes at https://github.com/ii/org/blob/master/k8s.io/kubernetes/k8s.io/iam.dump.md and will revisit this next week.

Thank you to @thockin for the pair session and for creating the necessary group and permissions for auditing.
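
As a minimal sketch of what such a dump could look like (kubernetes-public is the project named above; the output path is hypothetical):

# export the IAM policy for a project into the repo, so every change
# to permissions shows up as a commit diff
PROJECT=kubernetes-public
mkdir -p "audit/projects/${PROJECT}"
gcloud projects get-iam-policy "${PROJECT}" --format=yaml \
  > "audit/projects/${PROJECT}/iam.yaml"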

Provide the easy-to-remember URL addons.k8s.io for applying addons

Given the work we're trying to do in @kubernetes/sig-cluster-lifecycle, and given that we've chosen to use kubectl apply -f as the main addon applier, it would be great to have an easy-to-remember URL for applying addons, like this:

kubectl apply -f addon(s).k8s.io/network/flannel.yaml
kubectl apply -f addon(s).k8s.io/network/calico.yaml
kubectl apply -f addon(s).k8s.io/network/weave.yaml

kubectl apply -f addon(s).k8s.io/core/dns.yaml
kubectl apply -f addon(s).k8s.io/core/dashboard.yaml
kubectl apply -f addon(s).k8s.io/core/heapster.yaml
...

or just

kubectl apply -f addon(s).k8s.io/flannel.yaml
kubectl apply -f addon(s).k8s.io/calico.yaml
kubectl apply -f addon(s).k8s.io/weave.yaml

kubectl apply -f addon(s).k8s.io/dns.yaml
kubectl apply -f addon(s).k8s.io/dashboard.yaml
kubectl apply -f addon(s).k8s.io/heapster.yaml
...

This would be very user-friendly, and we should document the possible addons installable this way at the main site.

The more exotic addons could (and should) be kept away from this way of installing the addons; only the most common should be here.

We could just have a redirection file in this repo that points to the source of truth somewhere in the main k8s repo or at other places.
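
A sketch of what such a redirect could look like in the existing nginx redirector config (the upstream raw.githubusercontent.com path is hypothetical, just to show the shape):

# hypothetical: redirect addons.k8s.io/<path> to a source-of-truth file
server {
    server_name addons.k8s.io;
    location / {
        return 302 https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons$request_uri;
    }
}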

@kubernetes/sig-cluster-lifecycle @thockin @pwittrock

Downloading k8s.io go packages

I'm not sure this is the right place to ask, but it seems like the closest I can get to where the issue probably stems from, so here goes.

I'm trying to make a Homebrew (macOS package manager) package out of a go tool I wrote. In order to do this, I mark packages as dependencies in the package config file for Homebrew.

This looks like this:

go_resource "github.com/spf13/pflag" do
  url "https://github.com/spf13/pflag",
      :revision => "e57e3eeb33f795204c1ca35f56c44f83227c6e66"
end

go_resource "golang.org/x/crypto" do
  url "https://golang.org/x/crypto",
      :revision => "7d9177d70076375b9a59c8fde23d52d9c4a7ecd5"
end

go_resource "gopkg.in/yaml.v2" do
  url "https://gopkg.in/yaml.v2",
      :revision => "eb3733d160e74a9c7e442f435eb3bea458e1d19f"
end

go_resource "k8s.io/apimachinery" do
  url "https://k8s.io/apimachinery",
      :revision => "1168e538ea3ccf444854d1fdd5681d2d876680a7"
end

When installing the dependencies using Homebrew, things work as expected, except for packages prepended with k8s.io:

==> Downloading https://github.com/spf13/pflag
######################################################################## 100.0%
Warning: Cannot verify integrity of kubecrt--github.com-spf13-pflag-0.6.0
A checksum was not provided for this resource
For your reference the SHA256 is: cac4d7d9b0cf76aa2cb2277b2f3c120d35ce56fba36f050d54b828d6eefac136
==> Downloading https://golang.org/x/crypto
######################################################################## 100.0%
Warning: Cannot verify integrity of kubecrt--golang.org-x-crypto-0.6.0
A checksum was not provided for this resource
For your reference the SHA256 is: 237552c607b93d6e682100183d059b6f104a07b5ec4958148c3db9aaf0a0a09d
==> Downloading https://gopkg.in/yaml.v2
######################################################################## 100.0%
Warning: Cannot verify integrity of kubecrt--gopkg.in-yaml.v2-2.v2
A checksum was not provided for this resource
For your reference the SHA256 is: cb3c370a44d64e289fbc6c802b4e7eee1f4b800cd315bac15c111163b6671206
==> Downloading https://k8s.io/apimachinery
==> Downloading from https://kubernetes.io/apimachinery

curl: (22) The requested URL returned error: 404 
Error: Failed to download resource "kubecrt--k8s.io-apimachinery"
Download failed: https://k8s.io/apimachinery

Looking at this repository, it seems there's an Nginx instance behind k8s.io that does some rewriting/redirecting logic based on provided request details.

Since go get k8s.io/apimachinery works as expected, I'm wondering whether what I want to do is even possible; I just wanted to point this out and see if there's a way to make it work.
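
For what it's worth, go get works because the redirector answers ?go-get=1 requests with go-import metadata instead of a plain redirect, and Homebrew's curl-based download never sends that parameter. You can see the difference from the command line (output shown is approximate):

# with go-get=1, nginx serves go-import metadata for the go tool
curl -s 'https://k8s.io/apimachinery?go-get=1'
# expect something like:
#   <meta name="go-import"
#         content="k8s.io/apimachinery git https://github.com/kubernetes/apimachinery">

# without it, curl just follows the redirect to kubernetes.io and 404s,
# which is exactly what the Homebrew log above shows
curl -sIL 'https://k8s.io/apimachinery' | grep -i -e '^HTTP' -e '^location'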

Now, after hitting this error, I went and refactored my package so that it only depends on the official Golang package manager dep, and I use that to install the required dependencies. In the end this is the way to go, but I was still interested in the above issues and wanted to report them here, in case this is something you want to tackle.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

https broken on docs.k8s.io and docs.kubernetes.io redirectors

We're currently using DNS challenges with Let's Encrypt, which basically means we create a TXT record at _acme-challenge.SUBDOMAIN.DOMAIN., e.g. _acme-challenge.docs.kubernetes.io. for docs.kubernetes.io.

Unfortunately, we recently added wildcard CNAME records for *.docs.kubernetes.io and *.docs.k8s.io, which means that these TXT records aren't resolving and we can't verify these domains.

The certificate is expiring, and I don't have a great solution at the moment. We might need to use another challenge method, or we should probably investigate switching to a wildcard cert, now that Let's Encrypt supports those.
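
For the record, Let's Encrypt only issues wildcard certs via the DNS-01 challenge, so a wildcard switch keeps us on DNS challenges; a minimal sketch with cert-manager (issuer name hypothetical, API version assumes a current release):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: docs-k8s-io-wildcard
spec:
  secretName: docs-k8s-io-wildcard-tls
  issuerRef:
    name: letsencrypt-prod-dns01   # must use a DNS-01 solver; wildcard
    kind: ClusterIssuer            # names cannot be validated over HTTP-01
  dnsNames:
  - docs.k8s.io
  - "*.docs.k8s.io"                # covers the wildcard CNAME records too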

[Umbrella issue] Automate DNS for Kubernetes assets

Please see:

Notes from meeting:

  • [Aaron] Where are we on the roadmap?
    • [Tim] Need 3 things: 1 - tests with canaries; 2 - figure out billing transparency; 3 - alerts.
  • [Aaron] Where does it run?
    • [Tim] Google Cloud DNS
  • [Aaron] Where is OctoDNS running?
    • [Tim] Manually / locally using a Docker build, until we have a cluster up and running on a regular basis.
    • [Tim] To make changes, PR those changes; those with access to the GCP project approve and run OctoDNS.
  • [Brendan] Might need to track changes if people have to run it post PR merge.
  • [Aaron] We need some docs for process.
  • [Justin] Can we automate this?
    • [Aaron] Maybe copy the Prow process for k/org
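
For reference, the manual run Tim describes looks roughly like this (image tag and config file name are hypothetical; credentials for Google Cloud DNS would need to be mounted into the container):

# build the pinned octodns image, then compare desired vs. actual DNS
docker build -t octodns dns/
docker run octodns octodns-sync --config-file=octodns-config.yaml         # dry run: prints the plan
docker run octodns octodns-sync --config-file=octodns-config.yaml --doit  # actually applies changes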

DNS REQUEST: kind.sigs.k8s.io subproject docs

Type of DNS update: Create

one of: Create, Delete, Update

Domain being modified: k8s.io

e.g. k8s.io

Existing DNS Record:

NONE

New DNS Record:

# https://github.com/kubernetes-sigs/kind docs (@bentheelder, @munnerz)
kind.sigs:
  type: CNAME
  value: kindocs.netlify.com

Reason for update:

The SIG-Testing kind subproject has been working on subproject documentation using tooling aligned with SIG-Docs, currently at https://kindocs.netlify.com/ under a Kubernetes-owned Netlify team.

Ideally we'd like to start using a provider-independent URL. kind.sigs.k8s.io aligns with our current API namespace, and if this pattern works well, other subprojects could potentially follow in the future.

The record above follows https://www.netlify.com/docs/custom-domains/#manual-no-www

If you are planning to set your site's custom domain to a non-www subdomain, like blog.yourdomain.com, most of the above instructions won't be as applicable to you. The things to make sure of are:

  1. You should use a CNAME DNS record for that subdomain to point to your [your-site-name].netlify.com so you can use the CDN.
  2. You don't need a www.subdomain.yourdomain.com DNS entry.
  3. You probably want to stick with your external DNS host. If you use Netlify DNS, you'll be hosting yourdomain.com with us, not subdomain.yourdomain.com. Netlify DNS can only manage entire DNS zones for root domains.

cc @munnerz @chenopis

Use tide for PR merging

This is a core repository. As such, it needs to use the same merge automation as the rest of the project.

I have a PR open that will address this at an org-wide level: kubernetes/test-infra#9342. It will:

  • enable the approve plugin, to allow use of the /approve plugin
  • enable the blunderbuss plugin, to assign reviews based on OWNERS files
  • add all repos in the kubernetes org to tide's query

Can one of the repo maintainers here drop an LGTM (or objections) on the linked PR? Alternatively, if I hear no objections by Monday 10am PT of next week, I will merge the PR.
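
For reference, the org-wide change amounts to roughly the following in prow's plugins.yaml and config.yaml; this is a sketch, see the linked PR for the real diff:

# plugins.yaml: enable approve and blunderbuss for the whole org
plugins:
  kubernetes:
  - approve
  - blunderbuss

# config.yaml: add the org's repos to tide's merge query
tide:
  queries:
  - orgs:
    - kubernetes
    labels:
    - lgtm
    - approved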

get k8s.io/Kubernetes failed

Hi,
I hit an issue when fetching Kubernetes from k8s.io:
media@media-desktop:~/k8s$ go get k8s.io/kubernetes
package k8s.io/kubernetes: unrecognized import path "k8s.io/kubernetes" (https fetch: Get https://k8s.io/kubernetes?go-get=1: proxyconnect tcp: EOF)

Is there any environment setup error on my side? Thanks very much.
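
The proxyconnect tcp: EOF part of the error points at a broken or misconfigured HTTPS proxy on the client rather than at k8s.io itself. A generic way to check (not specific to this repo):

# see which proxy variables the go tool is picking up
env | grep -i _proxy
# try the same go-get metadata fetch directly, bypassing any proxy
curl --noproxy '*' -s 'https://k8s.io/kubernetes?go-get=1'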

Manage Google Group memberships via API

If we use the kubernetes.io gsuite, we should be able to manage group creation and membership via an API.

We should be able to create a service account, and then delegate to it the following scopes via domain-wide delegation (these appear to be the relevant Admin SDK scopes, matching the libraries linked below):

  • https://www.googleapis.com/auth/admin.directory.group
  • https://www.googleapis.com/auth/apps.groups.settings

With these, we should be able to script creation of groups, administer their settings, and modify their membership. It appears that we should be able to do this with just G Suite Basic, without adding on any extra options/services/cost.

It appears there are also SDK libraries to get started:
https://godoc.org/google.golang.org/api/admin/directory/v1
https://godoc.org/google.golang.org/api/groupssettings/v1
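
A minimal Go sketch of the idea, using domain-wide delegation to list groups; the key file path and impersonated admin address are hypothetical:

// list groups on the kubernetes.io domain via domain-wide delegation
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"golang.org/x/oauth2/google"
	admin "google.golang.org/api/admin/directory/v1"
)

func main() {
	ctx := context.Background()
	key, err := os.ReadFile("service-account.json") // hypothetical key file
	if err != nil {
		log.Fatal(err)
	}
	conf, err := google.JWTConfigFromJSON(key, admin.AdminDirectoryGroupScope)
	if err != nil {
		log.Fatal(err)
	}
	conf.Subject = "some-admin@kubernetes.io" // act as a (hypothetical) domain admin
	svc, err := admin.New(conf.Client(ctx))
	if err != nil {
		log.Fatal(err)
	}
	groups, err := svc.Groups.List().Domain("kubernetes.io").Do()
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Email)
	}
}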

Automate updates for k8s.io redirector service

Currently, changes to the k8s.io redirector service (i.e. any changes to the configs under the k8s.io subdirectory) require @ixdy or @thockin to manually update the cluster, following something like this process:

cd k8s.io/
kubectl -n k8s-io-canary apply -f configmap-nginx.yaml
# pick up new configs by forcing nginx to restart
kubectl -n k8s-io-canary scale deployment k8s-io --replicas=0
kubectl -n k8s-io-canary scale deployment k8s-io --replicas=1
TARGET_IP=[canary namespace service IP] make test

# if tests pass, deploy to production
kubectl -n k8s-io-prod apply -f configmap-nginx.yaml
# pick up new configs by forcing nginx to restart
kubectl -n k8s-io-prod scale deployment k8s-io --replicas=0
# note we scale back to 2, not 1
kubectl -n k8s-io-prod scale deployment k8s-io --replicas=2
# verify everything on prod
make test

There are lots of steps to automate here:

  • restarting nginx manually
  • testing on the canary namespace before updating the prod namespace
    • ideally we test before merge even, which is currently a manual process
  • requiring a human to deploy changes (rather than automatically updating on merge)
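
A first step could be wrapping the manual process above in a script; a sketch (namespace and replica defaults mirror the steps above, and rollout status is just a sanity check):

#!/usr/bin/env bash
# hypothetical deploy.sh: apply configs to one namespace and bounce nginx
set -euo pipefail
ns="${1:-k8s-io-canary}"
replicas="${2:-1}"   # prod uses 2
kubectl -n "${ns}" apply -f configmap-nginx.yaml
kubectl -n "${ns}" scale deployment k8s-io --replicas=0
kubectl -n "${ns}" scale deployment k8s-io --replicas="${replicas}"
kubectl -n "${ns}" rollout status deployment k8s-io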

Run publisher-bot in CNCF infra

https://github.com/kubernetes/publishing-bot is currently running in an OpenShift-based cluster inside Red Hat. We should move it to a CNCF-sponsored GKE cluster.

To try the bot out, we can use the kubernetes-nightly github org. To populate the repos in this org, we can use:

hack/fetch-all-latest-and-push.sh kubernetes-nightly

To run the bot you can use:

make build-image push-image deploy CONFIG=configs/kubernetes-nightly

Note that you will need to be added to get access to the github token (we use git crypt):

https://github.com/kubernetes/publishing-bot/pull/147

You will also need to be added as an admin in the kubernetes-nightly github org by @sttts. Once you are able to run the bot against kubernetes-nightly and the job is stable, we can promote to running the bot against the kubernetes target org.

We need a GSuite for sharing SIG/WG documents

Context : kubernetes/community#3159

Some docs are in personal google accounts, we need to move them somewhere better.

Example of a problem from a slack discussion:

lavalamp [12:58 PM]
yeah if you can make a gsuite available I'll be super happy, I'm constantly spammed by access requests from people who don't realize they just have to join a group

Also see https://groups.google.com/a/kubernetes.io/forum/?utm_medium=email&utm_source=footer#!msg/steering/my_roZIKuXo/I-pjrrvrDwAJ

Burn down and recreate k8s-infra cluster

Last we talked, the original manually created cluster had not yet been burned down. We want to burn it down, and use a script to stand up a cluster.

umbrella ref: #152

/assign @thockin @justinsb @hh
/wg k8s-infra

TBD: what... is actually being stood back up?

  • accounts / iam policies?
  • nodes
  • RBAC roles/policies
  • infra running on the cluster
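
A sketch of the kind of script that could stand things back up; the region, size, and flags are placeholder guesses:

#!/usr/bin/env bash
set -euo pipefail
# recreate the aaa cluster from code instead of by hand
gcloud container clusters create aaa \
  --project=kubernetes-public \
  --region=us-central1 \
  --num-nodes=1
# then re-apply namespaces and RBAC using the scripts under
# infra/gcp/bash/namespaces, and redeploy the apps under apps/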

Tests fail with SSL verification error

The tests fail with the following error

ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)

(both make test and make docker-test)

As far as we understand, this is because nginx uses a self-signed certificate and the HTTPS client doesn't have this self-signed certificate in its certificate store.
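
One way to make the tests pass without disabling verification entirely is to export the cert nginx actually serves and trust exactly that; a sketch, assuming the Python test client uses the default verify paths (which honor SSL_CERT_FILE), with host/port placeholders:

# grab the self-signed cert the local nginx presents...
openssl s_client -connect localhost:443 -servername k8s.io </dev/null 2>/dev/null \
  | openssl x509 > /tmp/k8s-io-test.crt
# ...and point the test run's trust store at it
SSL_CERT_FILE=/tmp/k8s-io-test.crt make test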

renaming k8s-infra-team to wg-k8s-infra

Now that our charter has landed (ref: kubernetes/community#2830), and we're still in early days, let's rename assets from k8s-infra-team to wg-k8s-infra:

slack channel

  • #k8s-infra-team -> #wg-k8s-infra

This is a transparent rename; we need to ask #slack-admins.

public google group

This is a hard break; I suggest e-mailing the list to let them know of the move, and then doing the move.

k/community docs

update references in here once the slack channel and public google group are renamed

admin groups

@thockin we only have a few references in the extant GCP project. If you create new google groups for [email protected] and [email protected], I can replicate the config and then remove the old groups. We also have some docs to update in DNS.

Honestly, I'm torn on this: part of me says do it so everything is the same; part of me says k8s-infra is good enough, since there are questions of whether this stays a WG or may one day become a SIG (or a committee, or who knows).

/assign @spiffxp
I'm willing to take on all but admin groups

/assign @dims @thockin
WDYT about the admin groups?

/wg k8s-infra
/kind cleanup

[Umbrella Issue] Project artifact management task list

This is an umbrella issue covering the tasks needed to manage, distribute, and promote container images. Once this issue is complete, people will have a single URL that they can use to pull each image no matter where they are in the world. Projects will have a known way to push and promote their images.

Tasks:

  • Design Document describing how all this will work.
  • GCR Repository for official published images
  • GCR Repository for projects to use (#158)
  • GCS Bucket for projects to use (#153)
  • Process for image promotion (#157)
  • ECR Repository mirror for Amazon
  • ACR Repository mirror for Azure
  • HTTP Redirector for image and artifact renaming.

Retire dockerhub kubernetes org and delete all associated images

In response to the recent Docker Hub breach we have rotated credentials on https://hub.docker.com/u/kubernetes, and removed all repositories contained therein.

None of the kubernetes GitHub orgs use Docker Hub for building images. We do not host official build artifacts at Docker Hub. The repositories that lived at https://hub.docker.com/u/kubernetes had not been updated in two to three years. Whether these images have migrated on to better homes, what they were built from, and how they are used is a mystery. They are effectively ownerless, and their continued existence is a liability.

image (last updated)
kubernetes/cassandra 2015-11-14
kubernetes/cauldron 2015-11-14
kubernetes/cluster-insight 2015-08-28
kubernetes/elasticsearch 2015-11-14
kubernetes/eptest 2015-11-14
kubernetes/etcd 2015-11-14
kubernetes/example-guestbook-php-redis 2015-11-13
kubernetes/flake 2015-11-14
kubernetes/fluentd-elasticsearch 2015-11-14
kubernetes/fluentd-gcp 2015-11-14
kubernetes/guestbook 2015-11-13
kubernetes/heapster_grafana 2016-01-14
kubernetes/heapster_influxdb 2015-12-11
kubernetes/heapster 2017-01-10
kubernetes/kibana4 2015-11-14
kubernetes/kibana 2015-11-13
kubernetes/kube2sky 2015-11-12
kubernetes/liveness 2015-11-13
kubernetes/mariadb 2015-11-14
kubernetes/mounttest 2015-11-14
kubernetes/nettest 2015-11-13
kubernetes/pause 2015-11-13
kubernetes/php-redis 2015-11-14
kubernetes/phpmyadmin 2015-11-13
kubernetes/redis-proxy 2015-11-14
kubernetes/redis-slave 2015-11-13
kubernetes/redis 2015-11-14
kubernetes/serve_hostname 2015-11-13
kubernetes/skydns 2015-11-13
kubernetes/sleepy 2015-11-14
kubernetes/test-webserver 2015-11-14
kubernetes/update-demo 2015-11-14
kubernetes/zookeeper 2015-11-14

(source: https://gist.github.com/spiffxp/26a14e1fe284814bbf290d7b47fb7e0c)

If this turns out to have badly broken someone's workflow, please contact our mailing list kubernetes-wg-k8s-infra@ or slack channel #wg-k8s-infra, and we will work to resolve the issue.

If we hear nothing within a week, we will retire https://hub.docker.com/u/kubernetes. It will remain owned by the project to ensure nobody else snags it, but we won't be using it.

/wg k8s-infra
/kind cleanup
/priority important-soon

GCS Bucket for sig-release prototype for apt and rpm repos

Some @kubernetes/sig-release members are looking at prototyping a new repository layout for k/k release artifacts, as part of https://github.com/kubernetes/kubernetes/issues/76738.

Can we please create a CNCF-owned temporary GCS bucket for this experiment? To start, they would try a job that pushes daily artifacts once they figure out the directory tree layout. This would not be permanent until the KEP is baked (at which point we can remove it and start fresh in a new bucket; everything would be scripted along the way to make that easy).
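
Creating such a bucket is a one-liner once we agree where it lives; the project and bucket name below are hypothetical:

# temporary, publicly readable bucket for the apt/rpm layout experiment
gsutil mb -p kubernetes-public -l us-central1 gs://k8s-release-repo-experiment/
gsutil iam ch allUsers:objectViewer gs://k8s-release-repo-experiment/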

cc @tpepper

Per k8s namespace billing information

As we grow the infrastructure to include more projects, namespaces are the logical point for bucketing our infra spend. We need to research / document / implement per-namespace billing information.

Bringing under #156.

@thockin did you have information on this?

Take inventory of Google-owned GCP projects for k8s-infra

Google has GCP projects for k8s-infra that hold more than just Kubernetes clusters and the pods/services therein. To better identify which services need to be migrated over, let's inventory which projects are currently used to run k8s-infra (or tests for k8s) and the services that they use.

eg:

  • k8s-prow
  • k8s-prow-builds
  • k8s-mungegithub
  • k8s-gubernator
  • kubernetes-jenkins
  • kubernetes-jenkins-pull
  • kubernetes-site
  • (the sundry projects used by boskos)
  • etc.
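
As a starting point, something like this could enumerate candidates and what each one has enabled (the name filter is a guess):

# list candidate Google-owned projects by naming convention
gcloud projects list --filter="projectId:k8s-* OR projectId:kubernetes-*" \
  --format="table(projectId,name)"
# for each candidate, list its enabled services
gcloud services list --enabled --project=k8s-prow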

Setup a job to automatically run and PR the results of audit/audit-gcp.sh

/assign @hh
/wg k8s-infra

EDIT: redoing description entirely, things have changed since this issue was created

We want a prowjob that does the following:

  • runs on k8s-infra-prow-build-trusted
  • uses the k8s-infra-gcp-auditor GCP service account via workload identity
  • runs the audit/audit-gcp.sh script
  • if there are changes, opens or updates an existing PR to k8s.io (similar to how prow opens/updates autobump PRs)
  • (optionally) fails if the existing PR has been open for too long

I am super open to suggestions about whether there are better or more-actionable ways to do auditing. But we need to start with something.

I sketched out what such a job would look like here: https://github.com/bashfire/prow-config/blob/435a8039bc9cf496690ad572884a72e9608ebb4e/config/jobs/bashfire/k8s-io.yaml
This is one of the first things we want to set up on a freshly created k8s-infra cluster, to be sure we actually have the cluster and all of the IAM policies / roles created properly.

First run it as a Job, then as a CronJob.
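
Until the prowjob exists, a plain CronJob expressing the same thing could look like the sketch below; the image is hypothetical, the schedule arbitrary, and the API version assumes a recent cluster (older ones need batch/v1beta1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: audit-gcp
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # workload identity maps this KSA to the k8s-infra-gcp-auditor GSA
          serviceAccountName: k8s-infra-gcp-auditor
          restartPolicy: Never
          containers:
          - name: audit
            image: gcr.io/k8s-staging-infra-tools/k8s-infra:latest  # hypothetical
            command: ["./audit/audit-gcp.sh"]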

[Umbrella Issue] Build GKE Cluster for running bots, utilities

  • We need one "alpha" cluster that is time-boxed (30 days?)
  • Same cluster will be used for both jobs
  • New Google Group for admins : k8s-infra-cluster-admins
  • Group will have "GKE Admin" access (described in [1])
  • Google Group seeded with
    • dims
    • justinsb
  • Job owners will be temporarily given "GKE Developer" access
  • Once the Job is running, Job owners will be switched over to "GKE Viewer" access
  • Jobs must have images, yamls, scripts in public repositories.
  • Job owners must document how to run the jobs
  • Job logs must be made public (somehow)
  • We will store info about each job in kubernetes/k8s.io repository for easy discovery
  • Eventually, we should have enough scripts to start a new cluster with all the jobs that we need

[1] https://cloud.google.com/iam/docs/understanding-roles#kubernetes_engine_roles

[Umbrella] Implement self-service DNS updates

Followup from #151

At the moment, updating DNS still requires humans to run commands. The ideal state is PR-driven:

  • DNS requests are made via PR
  • OWNERS files dictate who has authority to approve a given DNS request
  • When the PR merges, something deploys the change

https://github.com/kubernetes/k8s.io/tree/master/dns#todo outlines some potential tasks

The model we've used elsewhere is:

  • Build an image and store it somewhere in gcr.io
  • Use prow jobs that run in a "trusted" cluster (has credentials to do things, so don't let untrusted code run in it)
  • Use postsubmits to deploy immediately
  • Use periodics to reconcile state

Which would mean this may be blocked on:

  • community owned gcr.io repo for k8s-infra
  • community owned prow
  • community owned trusted prow cluster
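
Once those blockers clear, the postsubmit piece could look roughly like this sketch (job name, image, and make target are hypothetical):

postsubmits:
  kubernetes/k8s.io:
  - name: post-k8s-io-deploy-dns
    cluster: trusted              # the community-owned trusted prow cluster
    run_if_changed: '^dns/'
    decorate: true
    branches:
    - master
    spec:
      containers:
      - image: gcr.io/k8s-staging-infra-tools/octodns:latest  # hypothetical
        command: ["make", "-C", "dns", "push"]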

    Data-Driven Documents codes.