
License: Apache License 2.0


Multi-cluster Service APIs

This repository hosts the Multi-Cluster Service APIs. Providers can import packages in this repo to ensure their multi-cluster service controller implementations will be compatible with MCS data planes.

This repo contains the initial implementation per KEP-1645 and will be used for iterative development as we work toward our Alpha -> Beta graduation requirements.

Try it out

Requires kind

To see the API in action, run make demo to build and run a local demo against a pair of kind clusters. Alternatively, you can take a self-guided tour. Use:

  • ./scripts/up.sh to create a pair of clusters with mutually connected networks and install the mcs-api-controller.

    This will use a pre-existing controller image if one is available; it's recommended to run make docker-build first.

  • ./demo/demo.sh to run the same demo as above against your newly created clusters (must run ./scripts/up.sh first).

  • ./scripts/down.sh to tear down your clusters.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Our meeting schedule is here

Technical Leads

  • @pmorie
  • @jeremyot

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.


mcs-api's Issues

make demo: undeclared name: ServiceExport

Tried this on master and v0.1.0 tag. Both times make demo gave me:

$ make demo
./hack/update-codegen.sh
go: downloading k8s.io/gengo v0.0.0-20200114144118-36b2048a9120
Generating deepcopy funcs
Generating clientset at sigs.k8s.io/mcs-api/pkg/client/clientset
Generating listers at sigs.k8s.io/mcs-api/pkg/client/listers
Generating informers at sigs.k8s.io/mcs-api/pkg/client/informers
Generating register at sigs.k8s.io/mcs-api/pkg/apis/v1alpha1
go run sigs.k8s.io/controller-tools/cmd/controller-gen object:headerFile=./hack/boilerplate.go.txt paths="./..."
go: downloading k8s.io/apimachinery v0.18.4
go: downloading k8s.io/api v0.18.4
go: downloading k8s.io/utils v0.0.0-20200603063816-c1c6865ac451
go: downloading golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7
go fmt ./...
go vet ./...
go: downloading github.com/onsi/gomega v1.10.1
go: downloading github.com/onsi/ginkgo v1.14.0
# sigs.k8s.io/mcs-api/sigs.k8s.io/mcs-api/pkg/apis/v1alpha1
vet: sigs.k8s.io/mcs-api/pkg/apis/v1alpha1/zz_generated.register.go:61:4: undeclared name: ServiceExport
Makefile:57: recipe for target 'vet' failed
make: *** [vet] Error 2

Support alternate GVR in ServiceExport and ServiceImport in e2e tests

Vendors today may be using a mirror of the ServiceExport and ServiceImport APIs, and today's e2e tests make it impossible for them to run the tests as conformance tests, because the mcsclient is ultimately configured to use the v1alpha1 API, whose GVR (group, version, resource) is hardcoded as multicluster.x-k8s.io/v1alpha1.

As discussed in SIG-MC today 11/15/2022 we'd like to accommodate vendors who may, due to their own API deprecation policy or during active dev, use a mirror of MCS until upstream is GA (GKE being one of these).

Two ways to potentially address this:

  1. tests use a common struct and convert
  2. parameterize the GVR

Ultimately the schema of the resources should still be the same for conformance.

Here's an example against GKE clusters of the e2e tests failing on this point:

 Unexpected error:
      <*errors.StatusError | 0xc000134640>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
                  Continue: "",
                  RemainingItemCount: nil,
              },
              Status: "Failure",
              Message: "the server could not find the requested resource (post serviceimports.multicluster.x-k8s.io)",
              Reason: "NotFound",
              Details: {
                  Name: "",
                  Group: "multicluster.x-k8s.io",
                  Kind: "serviceimports",
                  UID: "",
                  Causes: [
                      {
                          Type: "UnexpectedServerResponse",
                          Message: "404 page not found",
                          Field: "",
                      },
                  ],
                  RetryAfterSeconds: 0,
              },
              Code: 404,
          },
      }
      the server could not find the requested resource (post serviceimports.multicluster.x-k8s.io)
  occurred
------------------------------
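Option 2 above (parameterizing the GVR) could look like the sketch below: the suite keeps the upstream defaults but lets a vendor override the group/version it targets. The struct mirrors the shape of apimachinery's schema.GroupVersionResource so the sketch stays dependency-free; the flag names are hypothetical, not existing mcs-api options.

```go
package main

import (
	"flag"
	"fmt"
)

// GroupVersionResource mirrors the shape of apimachinery's
// schema.GroupVersionResource so this sketch needs no external deps.
type GroupVersionResource struct {
	Group, Version, Resource string
}

// String renders the GVR the way it appears in API server errors,
// e.g. "serviceimports.multicluster.x-k8s.io/v1alpha1".
func (g GroupVersionResource) String() string {
	return g.Resource + "." + g.Group + "/" + g.Version
}

// ServiceImportGVR returns the GVR the e2e client should use. The
// defaults match upstream; a vendor running the suite against a
// mirrored, pre-GA API overrides group and version.
func ServiceImportGVR(group, version string) GroupVersionResource {
	return GroupVersionResource{Group: group, Version: version, Resource: "serviceimports"}
}

func main() {
	// Hypothetical flags for illustration only.
	group := flag.String("mcs-group", "multicluster.x-k8s.io", "API group for MCS resources")
	version := flag.String("mcs-version", "v1alpha1", "API version for MCS resources")
	flag.Parse()
	fmt.Println(ServiceImportGVR(*group, *version))
}
```

The schema of the resources stays fixed; only the coordinates the client dials are configurable, which keeps the conformance contract intact.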

Update Test 1 / "connectivity_test" to confirm expected endpoints

Per the KEP, we must test that

Test cluster A can contact service imported from cluster B and route to expected endpoints.

Today, the connectivity_test (which I also refer to as Test 1) confirms the first part ("contact") but does not confirm the second part ("route to expected endpoints"). It does this by confirming that something responded to a request for the Service IP, and that its response contains some text it expects (reference).

This issue is done when the test also gathers the expected pod IPs for the pods in the deployment backing the Service in source cluster B first*, and then confirms during the connectivity test that those IPs, and only those IPs, are responding to the Service IP request.

*This should probably be done by collecting the source cluster's pod IPs (Pod.Status.PodIPs) through the k8s API, and probably be implemented in a separate method that can be reused by Test 3 later.
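The "route to expected endpoints" check reduces to a set comparison: collect the backing pods' IPs from cluster B, hit the Service IP repeatedly, and require the set of responding IPs to match the expected set exactly. A dependency-free sketch of that assertion (the helper name is hypothetical):

```go
package main

import "fmt"

// respondersMatchExpected reports whether every observed responder IP
// belongs to the expected pod-IP set and every expected IP was seen at
// least once. This is the assertion the updated connectivity test
// would make after repeatedly requesting the imported Service IP.
func respondersMatchExpected(expected, observed []string) bool {
	want := make(map[string]bool, len(expected))
	for _, ip := range expected {
		want[ip] = false
	}
	for _, ip := range observed {
		if _, ok := want[ip]; !ok {
			return false // an IP outside the backing deployment responded
		}
		want[ip] = true
	}
	for _, seen := range want {
		if !seen {
			return false // an expected endpoint never responded
		}
	}
	return true
}

func main() {
	fmt.Println(respondersMatchExpected(
		[]string{"10.0.0.1", "10.0.0.2"},
		[]string{"10.0.0.2", "10.0.0.1", "10.0.0.2"},
	))
}
```

Requiring every expected endpoint to respond assumes enough samples for the load balancer to hit all backends; a weaker subset-only check may be appropriate for small sample counts.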

Alpha --> Beta Graduation

From KEP 1645, here are the outlined requirements for graduation from Alpha to Beta for MCS:

  • A detailed DNS spec for multi-cluster services.
  • NetworkPolicy either solved or explicitly ruled out.
  • API group chosen and approved.
  • E2E tests exist for MCS services.
  • Beta -> GA Graduation criteria defined.
  • At least one MCS DNS implementation.
  • A formal plan for a standard Cluster ID.
  • Finalize a name for the "supercluster" concept.
  • Cluster ID KEP is in beta

Reference:

This issue proposes that we track the progress of the above steps (or modify them as appropriate) and make this a canonical reference for graduation progress.

General feedback/questions

I'm not familiar with the standard procedure around K8s proposals and the implementation process. Is there a place where k8s looks for community feedback? Or a way to comment/ask questions around designs?

I've read kep 1645, and have some general questions I'd like to get answered. In lieu of a better place to put these questions I'll add them here, but feel free to close this if there's a better place:

  1. The implementation of the MCS controller is not well defined in the spec, including how it learns about the Endpoints/EndpointSlices from non-local clusters that are part of the clusterset. How does the MCS controller authenticate, and which APIs is it hitting specifically? Are there new APIs, or does it use the standard Kubernetes APIs?
  2. Since the interop APIs are not defined, is it expected that two different Kubernetes implementations can be part of the same clusterset?
  3. Can you elaborate on how networking works for ClusterIPs with internal IP addresses on non-network-connected Kubernetes clusters?

My guess for #3 is that it's not supported, but I'd also guess that this is one of the biggest use cases (or at least would be if it were supported). I'm surprised to see this proposal not include an ingress option for these services, or at least a way to configure one.

Cheers!

ServiceExport Client should support operations across multiple namespaces

Current Behavior:
The serviceexport client has ns as a private field in the struct (https://github.com/kubernetes-sigs/mcs-api/blob/master/pkg/client/clientset/versioned/typed/apis/v1alpha1/serviceexport.go#L55) and uses it as a hardcoded namespace value in all operations. This makes it difficult to use the client across an entire cluster, e.g. to create ServiceExport resources for Services in arbitrary namespaces.

Expected Behavior:
I can create a single client and provide the namespace at call time to direct CRUD operations to a specific namespace.

As a workaround, I have been directly using the RESTClient, for example:

func (sc *ServiceExportController) createServiceExport(service *v1.Service) (v1alpha1.ServiceExport, error) {
	serviceExport := v1alpha1.ServiceExport{}
	serviceExport.Namespace = service.Namespace
	serviceExport.Name = service.Name

	result := &v1alpha1.ServiceExport{}
	err := sc.client.Post().
		Namespace(service.Namespace).
		Resource("serviceexports").
		VersionedParams(&metav1.CreateOptions{}, scheme.ParameterCodec).
		Body(&serviceExport).
		Do(context.TODO()).
		Into(result)

	// Return the decoded object along with any error; the original
	// snippet returned only err, which does not match the signature.
	return *result, err
}

But that is unwieldy in the code, and makes unit testing more difficult.
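The requested shape resembles the standard typed-clientset pattern, where the namespace is a parameter of a scoping call rather than a field baked into the client at construction time. A dependency-free sketch of that interface (all types and names here are illustrative, not the generated mcs-api client):

```go
package main

import "fmt"

// ServiceExport is a stand-in for v1alpha1.ServiceExport.
type ServiceExport struct {
	Namespace, Name string
}

// ServiceExportsGetter scopes a client to a namespace at call time,
// mirroring client-go's generated clientsets, where usage looks like
// clientset.MulticlusterV1alpha1().ServiceExports(ns).Create(...).
type ServiceExportsGetter interface {
	ServiceExports(namespace string) ServiceExportInterface
}

type ServiceExportInterface interface {
	Create(se *ServiceExport) (*ServiceExport, error)
}

// fakeClient is an in-memory implementation used only to show the
// call pattern; it is not part of the real repo.
type fakeClient struct {
	store map[string]*ServiceExport
}

func (c *fakeClient) ServiceExports(ns string) ServiceExportInterface {
	return &fakeNamespacedClient{parent: c, ns: ns}
}

type fakeNamespacedClient struct {
	parent *fakeClient
	ns     string
}

func (c *fakeNamespacedClient) Create(se *ServiceExport) (*ServiceExport, error) {
	out := *se
	out.Namespace = c.ns
	c.parent.store[c.ns+"/"+se.Name] = &out
	return &out, nil
}

func main() {
	client := &fakeClient{store: map[string]*ServiceExport{}}
	// One client, different namespaces chosen per call.
	for _, ns := range []string{"team-a", "team-b"} {
		se, _ := client.ServiceExports(ns).Create(&ServiceExport{Name: "web"})
		fmt.Println(se.Namespace, se.Name)
	}
}
```

With this shape, a cluster-wide controller holds one client and injects the namespace per reconcile, and a fake implementation like the one above makes unit testing straightforward.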

Update e2e tests to support k8s 1.21+ only, particularly to fix EndpointSlice version errors when using them today

We are observing a breakage in our e2e tests against clusters running Kubernetes versions above 1.21. The MCS API depends on EndpointSlice, and the e2e tests were written against EndpointSlice v1beta1, which graduated to v1 in Kubernetes 1.21.

Per discussion in SIG-MC today 11/15/2022, e2e tests have a lower standard of backwards compatibility than vendor implementations might choose, and we are OK with making them work against only the GA version of EndpointSlice, even in their role as conformance tests. Among the reasons given: the GA version has been out for longer than a year, and within the purview of MCS API history it has been a stable dependency.
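If the suite targets 1.21+ only, the remaining compatibility logic collapses to a single version gate before using discovery.k8s.io/v1. A minimal sketch of that gate (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// supportsGAEndpointSlice reports whether a server version string like
// "v1.21.3" is at least Kubernetes 1.21, where EndpointSlice graduated
// from discovery.k8s.io/v1beta1 to discovery.k8s.io/v1.
func supportsGAEndpointSlice(version string) bool {
	parts := strings.SplitN(strings.TrimPrefix(version, "v"), ".", 3)
	if len(parts) < 2 {
		return false
	}
	major, err1 := strconv.Atoi(parts[0])
	// Tolerate minors like "21+" as reported by some builds.
	minor, err2 := strconv.Atoi(strings.TrimRight(parts[1], "+"))
	if err1 != nil || err2 != nil {
		return false
	}
	return major > 1 || (major == 1 && minor >= 21)
}

func main() {
	fmt.Println(supportsGAEndpointSlice("v1.21.3")) // true
	fmt.Println(supportsGAEndpointSlice("v1.20.9")) // false
}
```

The suite can then fail fast with a clear "requires Kubernetes 1.21+" message instead of the opaque version errors observed today.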

Update e2e tests to conform to KEP obligations to unblock beta

Parent issue tracking work to bring MCS API repo e2e tests up to par with the KEP requirements. For more background, see the e2e test section of the MCS API beta blockers working doc which are based on the Test Plan in the KEP.

Issues tracked for this work (links TBD):

  • Update README
    • that we need namespace permissions
    • that we will make and destroy namespace of “CYZ” form
    • that Cluster provisioning is out of band
    • Limitations on network topologies (due to how IPs are discovered in tests)
  • #15
  • #16
  • Add Test 3

scripts/up.sh failed add routes to clusters

What happened:
I'm trying to run the demo, but it seems the clusters cannot get ready.

What you expected to happen:

# ./scripts/up.sh
...
Connecting cluster networks...
Connecting cluster c1 to c2.kubeconfig
Command line is not complete. Try option "help"

How to reproduce it (as minimally and precisely as possible):

  • run command make docker-build.
  • run command ./scripts/up.sh

Anything else we need to know?:

# kind version
kind v0.9.0 go1.15.2 linux/amd64
# kubectl version
Client Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.3-dirty", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"dirty", BuildDate:"2020-11-02T09:52:18Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • OS (e.g: cat /etc/os-release):
# cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:

Update requirements for trying out the demo

Currently, the only documented requirement for running the demo is kind. To properly run the demo, pv and Python 2 also need to be installed (neither comes installed by default on my fairly recent Ubuntu 20.04 install). The same goes for tmux: if the demo isn't running in a tmux session, an error is shown.


Technically, kubectl is also a requirement that doesn't come installed by default.

Intro tutorials for MCS API use cases

Develop some vendor-agnostic intro tutorials about the MCS API that can be easily digested as part of a generic documentation push. Keep them implementation-agnostic, but it would be useful to tie them to some common use cases, as suggested below.

Ideas:

  • MCS for a single-cluster stateful service serving multiple clusters (single ServiceExport case) (see this walkthrough for some inspiration)
  • MCS for a same-named stateless service in clusters in different regions for simple failover (merged ServiceExport case)
  • MCS for a same-named stateless service for blue-green upgrade (see this video for example)

Create Test 2 / "local service not impacted" e2e test

Per the KEP, an e2e test must exist that tests:

Test cluster A local service not impacted by same-name imported service.

We define "not impacted" as:

Not impacted = when you run a Service in cluster A called foo and you have a ServiceImport for foo, you can still resolve foo.ns.svc.cluster.local, its IP is different from the ServiceImport IP, and when you actually request the local service IPs you get a response back and it's the right response (so we have to make the response smart, such as reporting the responder's cluster ID):
foo.ns.svc.clusterset.local will sometimes return cluster B
foo.ns.svc.cluster.local will always (always = 50 times in a row) return cluster A

Given a setup with two clusters, this test should do the following:

  • Service is in ClusterA, NOT exported
    • can hit it with cluster.local
  • Service is in ClusterB and exported
    • can hit it with clusterset.local, always reports ClusterB
  • Also Export the Service in ClusterA
    • can hit it with clusterset.local, sometimes will say ClusterB, sometimes will say ClusterA
  • Hit it 50 times; if we get at least one response from each cluster, we're good
    • Can still hit it with cluster.local, will ALWAYS say its from clusterA
    • No other services should be responding
      • Relevant because if the SUT happens to have other clusters participating and they get all up in there, we want to report that

See the existing connectivity test for inspiration on setup and potential request and response pods. Notably in this case the response needs to be smart enough to share what cluster it is in.
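The pass/fail conditions above reduce to two checks over 50 samples: cluster.local responses must all come from cluster A, while clusterset.local responses must include at least one from each of A and B. A dependency-free sketch with a stubbed resolver (names are hypothetical; the real test would query the smart responder pods):

```go
package main

import "fmt"

const samples = 50

// allFrom reports whether every response names the given cluster
// (the cluster.local requirement).
func allFrom(responses []string, cluster string) bool {
	for _, r := range responses {
		if r != cluster {
			return false
		}
	}
	return len(responses) > 0
}

// sawBoth reports whether both clusters answered at least once
// (the clusterset.local requirement).
func sawBoth(responses []string, a, b string) bool {
	seenA, seenB := false, false
	for _, r := range responses {
		seenA = seenA || r == a
		seenB = seenB || r == b
	}
	return seenA && seenB
}

func main() {
	// Stub: in the real test each query hits the service and the
	// responder reports its cluster ID.
	query := func(host string, i int) string {
		if host == "foo.ns.svc.cluster.local" {
			return "cluster-a" // the local Service always wins locally
		}
		if i%2 == 0 {
			return "cluster-a"
		}
		return "cluster-b"
	}
	var local, clusterset []string
	for i := 0; i < samples; i++ {
		local = append(local, query("foo.ns.svc.cluster.local", i))
		clusterset = append(clusterset, query("foo.ns.svc.clusterset.local", i))
	}
	fmt.Println(allFrom(local, "cluster-a"), sawBoth(clusterset, "cluster-a", "cluster-b"))
}
```

A third check, per the last bullet, would assert that no cluster ID outside {A, B} ever appears in either response set.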

Conformance test: ServiceExport v. ServiceImport clarification

The current e2e test’s setup creates a Service and a ServiceImport, and then expects the corresponding EndpointSlice to be created (within a reasonable time frame).

Does this match the specification? One possible interpretation of the spec is that ServiceImports are created as a result of a ServiceExport being created; this is reinforced by the following sentence in the spec:

If all ServiceExport instances are deleted, each ServiceImport will also be deleted from all clusters.

This gives the impression that a ServiceImport isn’t expected if no corresponding ServiceExport exists.
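Under that interpretation, the import controller's desired state is simple: a ServiceImport exists if and only if at least one ServiceExport for the same namespaced name exists somewhere in the clusterset. A dependency-free sketch of that rule (function and key format are illustrative):

```go
package main

import "fmt"

// reconcileImports computes which ServiceImports should exist given
// the ServiceExports currently present across the clusterset, under
// the interpretation that an import exists iff >= 1 export exists.
// Keys are "namespace/name"; export values count exporting clusters.
func reconcileImports(exports map[string]int) map[string]bool {
	imports := make(map[string]bool)
	for name, n := range exports {
		if n > 0 {
			imports[name] = true
		}
	}
	return imports
}

func main() {
	// "ns/bar" has no remaining exports, so no import should exist.
	exports := map[string]int{"ns/foo": 2, "ns/bar": 0}
	fmt.Println(reconcileImports(exports))
}
```

A conformance test built on this reading would create a ServiceExport, expect the ServiceImport (and then the EndpointSlice) to follow, and expect the ServiceImport to disappear once the last export is deleted.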

POC new package for formal conformance tests

Derive a POC from the existing e2e tests (or from scratch) that meets the criteria in the conformance test proposal doc. TL;DR:

  • uses Ginkgo v2
  • organizes setup and assertions into pre-MCS ("A"), MCS setup ("B"), and MCS cleanup ("C") stanzas that are reusable across test suites
  • provides a config-driven way to compose the matrix of scenarios into test suites using the stanza framework above
