
alibaba / hybridnet

243 stars · 10 watchers · 32 forks · 116.51 MB

Make underlay and overlay networks coexist, communicate, and even transform into each other on demand.

License: Apache License 2.0

Languages: Go 98.17% · Shell 1.56% · Makefile 0.27%
cni cni-plugin kubernetes container networking vxlan sdn overlay-network kubernetes-networking vlan

hybridnet's Introduction

Hybridnet


Hybridnet is an open source container networking solution designed for hybrid clouds, integrated with Kubernetes and used officially by the following well-known PaaS platforms:

  • ACK Distro of Alibaba Cloud
  • AECP of Alibaba Cloud
  • SOFAStack of Ant Financial Co.

Introduction

Most CNI plugins classify container networks into two types and keep them working separately, with no connectivity between them:

  1. Overlay, an abstract data plane on top of the host network, which is usually invisible to the underlying network and adds encapsulation overhead
  2. Underlay, which puts container traffic "directly" onto the host network and depends on the capabilities of the underlying network

In hybridnet, we try to break the strict boundary between these forms of container network with a simple design:

  1. Overlay and underlay networks can be created in the same cluster (see the manifest sketch after this list)
  2. If either side of a connection is an overlay container (even if the other side is an underlay container), it is treated as an "Overlay" connection and is never NATed; in other words, underlay containers always reach overlay containers directly, just as if they were all overlay containers
  3. Traffic between underlay containers keeps its original "Underlay" attributes: lower overhead and visibility to the underlying network
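
For point 1, both forms can be declared side by side through hybridnet's Kubernetes CRDs. The manifest below is a hedged sketch only: the group/version is inferred from the networking.alibaba.com annotations used elsewhere on this page, and the exact fields should be checked against the CRD documents on the wiki.

apiVersion: networking.alibaba.com/v1   # inferred group/version; verify against the CRDs
kind: Network
metadata:
  name: overlay-demo
spec:
  type: Overlay             # VXLAN data plane
---
apiVersion: networking.alibaba.com/v1
kind: Network
metadata:
  name: underlay-demo
spec:
  type: Underlay            # VLAN/BGP data plane
  netID: 100                # illustrative VLAN ID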

(datapath diagram)

Users of hybridnet can run both Overlay and Underlay networks inside a single Kubernetes cluster without any concern about connectivity, which brings a more flexible and extensible container network for orchestrating different applications.

As the foundation of hybridnet, we use "Policy Routing" to distribute traffic across the different data planes. "Policy Routing" was introduced in version 2.1 of the Linux kernel as a basic part of the routing subsystem, which gives it strong stability and compatibility. Two further docs, about hybridnet components and about the contrast between hybridnet and other CNI implementations, can be consulted as references.
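
For intuition, the snippet below shows the kind of primitive "Policy Routing" offers, using generic iproute2 commands; the CIDR, device name, and table number are illustrative, not hybridnet's actual rule set.

# Route traffic for the overlay CIDR through a dedicated routing table,
# while everything else keeps using the main table.
ip route add 10.244.0.0/16 dev eth0.vxlan4 table 100
ip rule add to 10.244.0.0/16 table 100 pref 2000
ip rule show    # inspect the resulting policy database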

Features

  • Unified topology-aware management APIs implemented with Kubernetes CRD
  • Support IPv4/IPv6 dual stack
  • Multiple network fabrics: VXLAN (overlay), VLAN (underlay), BGP (underlay), etc.
  • Advanced IPAM: retaining IPs for stateful workloads, topology-aware IP allocation, etc.
  • Good compatibility with other networking components (e.g., kube-proxy, cilium)

How-To-Use

See the documents on the wiki.

Compile and build

Clone the repository locally and run make to build the hybridnet images.
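
For example (a minimal sketch; image names and extra make targets depend on the Makefile version):

git clone https://github.com/alibaba/hybridnet.git
cd hybridnet
make    # builds the hybridnet container images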

Contributing

Hybridnet welcomes contributions, including bug reports, feature requests, and documentation improvements. If you want to contribute, please start with CONTRIBUTING.md.

Contact

For any questions about hybridnet, please reach us via:

  • Slack: #general on the hybridnet slack
  • DingTalk: Group No.35109308
  • E-mail: private or security issues should be reported via e-mail addresses listed in the MAINTAINERS file

License

Apache 2.0 License

hybridnet's People

Contributors

arctan90, gaozhengwei, haiyangding, hhyasdf, louhwz, mars1024, oilbeater, sjtufl, swimiltylers


hybridnet's Issues

enhanced address is being used from node to local pods unexpectedly

Bug Report

Type: bug report

What happened


The enhanced address is used as the source address when pinging local pods from the node, so the ICMP reply never comes back.
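
A quick way to confirm which source address the node selects is ip route get (generic iproute2; the pod IP below is illustrative):

# The "src" field of the output is the source address the kernel picked;
# if it shows the enhanced address, the reply path is broken as described.
ip route get 10.0.0.5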

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • hybridnet version: 3.2.0
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

choose preferred host interface by subnet

Issue Description

Type: feature request

Describe what feature you want

Currently hybridnet chooses the preferred host interface through two flags, which are not configurable enough.
We need a new CRD that can be attached to subnets; a hypothetical shape is sketched below.
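
A purely hypothetical shape for such a CRD, sketched for illustration only (the kind and field names below are invented, not an implemented API):

apiVersion: networking.alibaba.com/v1
kind: HostInterfacePolicy         # hypothetical kind
metadata:
  name: prefer-eth1
spec:
  subnet: subnet-a                # the Subnet this policy attaches to
  preferredInterfaces:
  - eth1
  - eth0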

Additional context

specify subnet without network

Issue Description

Type: feature request

Describe what feature you want

Allow specifying a subnet without naming its network; only the subnet-specifying annotation should be needed, as in the sketch below.
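
That is, a pod would only carry something like the following (annotation key as it appears elsewhere in this tracker; the subnet name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: demo
  annotations:
    # Only the subnet is named; the owning network is inferred from it.
    networking.alibaba.com/specified-subnet: "subnet-a"
spec:
  containers:
  - name: app
    image: nginx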

Additional context

Just for learning from Calico?

Bug Report

Type: bug report

What happened

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • hybridnet version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

Missing validation for start and end when creating a subnet

Bug Report

Type: bug report

What happened

When creating a subnet whose start address is greater than its end address, the webhook does not deny it, even though the subnet's number of available IPs is zero.

What you expected to happen

The webhook should deny it directly, because such a range is invalid.
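
A minimal sketch of the missing check, in Go (illustrative only, not the actual hybridnet webhook code):

package main

import (
	"bytes"
	"fmt"
	"net"
)

// validateRange rejects a range whose start address is ordered after its
// end address, so that a zero-size subnet fails fast in the webhook.
func validateRange(start, end net.IP) error {
	if bytes.Compare(start.To16(), end.To16()) > 0 {
		return fmt.Errorf("invalid range: start %s is after end %s", start, end)
	}
	return nil
}

func main() {
	fmt.Println(validateRange(net.ParseIP("192.168.0.100"), net.ParseIP("192.168.0.10")))
}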

iptables not correctly configured on CentOS 8 host

Bug Report

Type: bug report

What happened

This might be a compatibility problem.

I set up a k8s cluster on CentOS 8 (which links iptables to nftables) and ran pods on Overlay and Underlay networks. On the host machines, lsmod | grep ip_tables shows that ip_tables is used by iptable_nat, iptable_mangle, and iptable_filter.

After checking the logs of the daemon pods, I believe the iptables rules are written without error. But on the host machines, no rama-related iptables rules show up in either iptables-save or nft list ruleset.

Notably, iptables-save warns that there are more rules in the iptables-legacy tables.
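
On such hosts, the active backend can be confirmed with generic commands (nothing hybridnet-specific):

# Modern iptables prints its backend in the version string,
# e.g. "iptables v1.8.x (nf_tables)" or "(legacy)":
iptables -V
# Compare what each backend actually holds, where the split
# binaries are installed:
iptables-legacy-save | head
iptables-save | head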

What you expected to happen

I should observe rama-related rules in iptables-save.

How to reproduce it (as minimally and precisely as possible)

Set up a k8s cluster with CentOS 8 nodes. Install rama and run several Overlay/Underlay pods.

Anything else we need to know?

CentOS 8 removes legacy iptables from its packages and links the iptables commands to nftables.

Kube-proxy works perfectly.

Environment

  • rama version: v1
  • OS (e.g. cat /etc/os-release): CentOS 8
  • Kernel (e.g. uname -a): Linux 4.18.0-305.7.1.el8_4.x86_6
  • Kubernetes version: v1.21.

most pods should still be created even if the webhook is inactive

Issue Description

Type: feature request

Describe what feature you want

Currently, if the webhook is down, all pods are blocked from creation. But in fact, most pods can be handled by the controller even while the webhook is inactive.

Additional context

IPInstance not released if pod is Completed or Evicted

Bug Report

Type: bug report

What happened

If a pod is evicted or is a completed Job pod, its IPInstance is not released until the pod is deleted manually. It would be better for hybridnet to release such IPInstances automatically.

What you expected to happen

The IPInstance should be released when a pod is "Completed" or "Evicted".

How to reproduce it (as minimally and precisely as possible)

Running a Job, or getting a pod evicted, reproduces it.

Anything else we need to know?

Environment

  • hybridnet version: v0.2.1
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

policy container exit with ip6tables-legacy-save error

Bug Report

Type: bug report

What happened

The policy container of the daemon pod exits with an ip6tables-legacy-save error:

2022-05-10 08:11:59.242 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="raw"
2022-05-10 08:11:59.249 [WARNING][24856] felix/table.go 814: iptables save failed error=exit status 1
2022-05-10 08:11:59.249 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="nat"
2022-05-10 08:11:59.249 [WARNING][24856] felix/table.go 814: iptables save failed error=exit status 1
2022-05-10 08:11:59.249 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="filter"
2022-05-10 08:11:59.444 [WARNING][24856] felix/table.go 814: iptables save failed error=exit status 1
2022-05-10 08:11:59.444 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="raw"
2022-05-10 08:11:59.444 [WARNING][24856] felix/table.go 814: iptables save failed error=exit status 1
2022-05-10 08:11:59.444 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="mangle"
2022-05-10 08:11:59.451 [WARNING][24856] felix/table.go 814: iptables save failed error=exit status 1
2022-05-10 08:11:59.451 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="nat"
2022-05-10 08:11:59.451 [WARNING][24856] felix/table.go 814: iptables save failed error=exit status 1
2022-05-10 08:11:59.451 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="filter"
2022-05-10 08:11:59.846 [WARNING][24856] felix/table.go 814: iptables save failed error=exit status 1
2022-05-10 08:11:59.846 [WARNING][24856] felix/table.go 763: ip6tables-legacy-save command failed error=exit status 1 ipVersion=0x6 stderr="" table="mangle"
2022-05-10 08:11:59.846 [PANIC][24856] felix/table.go 769: ip6tables-legacy-save command failed after retries ipVersion=0x6 table="mangle"
panic: (*logrus.Entry) 0xc000992640

goroutine 205 [running]:
github.com/sirupsen/logrus.Entry.log(0xc00007e180, 0xc000269bc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7f6900000000, ...)
	/go/pkg/mod/github.com/projectcalico/[email protected]/entry.go:112 +0x2ff
github.com/sirupsen/logrus.(*Entry).Panic(0xc0005c3310, 0xc00091eb38, 0x1, 0x1)
	/go/pkg/mod/github.com/projectcalico/[email protected]/entry.go:182 +0xfa
github.com/sirupsen/logrus.(*Entry).Panicf(0xc0005c3310, 0x27b6396, 0x1f, 0xc00091ebf8, 0x1, 0x1)
	/go/pkg/mod/github.com/projectcalico/[email protected]/entry.go:230 +0xc5
github.com/projectcalico/felix/iptables.(*Table).getHashesAndRulesFromDataplane(0xc0008af000, 0xa3d43fb, 0x3caaf60)
	/go/src/github.com/projectcalico/felix/iptables/table.go:769 +0x4bc
github.com/projectcalico/felix/iptables.(*Table).loadDataplaneState(0xc0008af000)
	/go/src/github.com/projectcalico/felix/iptables/table.go:606 +0x1e5
github.com/projectcalico/felix/iptables.(*Table).Apply(0xc0008af000, 0xc0007d1fb0)
	/go/src/github.com/projectcalico/felix/iptables/table.go:990 +0x1025
github.com/projectcalico/felix/dataplane/linux.(*InternalDataplane).apply.func3(0xc000967358, 0xc000967360, 0xc0006e4c00, 0xc000967370, 0xc0008af000)
	/go/src/github.com/projectcalico/felix/dataplane/linux/int_dataplane.go:1818 +0x3c
created by github.com/projectcalico/felix/dataplane/linux.(*InternalDataplane).apply
	/go/src/github.com/projectcalico/felix/dataplane/linux/int_dataplane.go:1817 +0x6de

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • hybridnet version: v0.4.2
  • OS (e.g. cat /etc/os-release): CentOS Linux 7
  • Kernel (e.g. uname -a): Linux iZ0jlbn6dzzahicroo1vwhZ 3.10.0-957.21.3.el7.x86_64
  • Kubernetes version: 1.20
  • Install tools: ack-distro
  • Others:

Only IPv4 feature valid if dualstack feature-gate is false

Issue Description

Type: feature request

Describe what feature you want

For now, if the dualstack feature gate is false, users can still create an IPv6-only environment, which makes some behaviors unpredictable. We should enforce that only an IPv4 environment can be created when the feature gate is false.

Additional context

  1. Reuse the quota label for ipv4-only and dualstack environments.
  2. Reuse the statistics fields for ipv4-only and dualstack environments.

Error "file exists" happens while creating pod

Bug Report

Type: bug report

What happened

The pod keeps being "ContainerCreating" for a long time, and a "failed to set link to host netns: file exists" error appears in the output of kubectl describe po.

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

It happens randomly.

Anything else we need to know?

Environment

  • hybridnet version: v0.4.0
  • OS (e.g. cat /etc/os-release): CentOS 8
  • Kernel (e.g. uname -a):
  • Kubernetes version: 1.20.4
  • Install tools:
  • Others:

Nodes get "empty" quota while the Network still have available addresses to allocate

Bug Report

Type: bug report

What happened

An Underlay Network (still with enough IP addresses to allocate) has 17 nodes, 16 of which have the "networking.alibaba.com/address-quota: empty" label while only one has the "networking.alibaba.com/address-quota: noempty" label.

The "noempty" one is the last item of network.status.nodeList.

What you expected to happen

All of the nodes have the "networking.alibaba.com/address-quota: noempty" label.

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • hybridnet version: v0.4.2
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version: 1.20
  • Install tools:
  • Others:

specify network-type/network/subnet for a namespace

Issue Description

Type: feature request

Describe what feature you want

Hybridnet already supports specifying network-type/network/subnet for a single pod, but it may also be necessary to support specifying them for a batch of pods in the same namespace.

Additional context

Separate IPv4 and IPv6 storage in network

Issue Description

Type: feature request

Describe what feature you want

If one network has several subnets, some IPv4 and some IPv6, the dual-stack allocation loops to find an available IPv4 subnet and an available IPv6 subnet, so the subnet index (the last allocated subnet) increases quickly and non-continuously. Separating IPv4 and IPv6 storage in the network would avoid this.

Additional context

stricter address range validation on subnet change

Issue Description

Type: feature request

Describe what feature you want

Currently a subnet change triggers an address range validation, but it is not precise enough;
ReservedIPs and ExcludedIPs should be taken into account, too.

Additional context

NONE

Bad path from overlay pod to gateway of underlay subnets

Bug Report

Type: bug report

What happened

There is a bad network path from overlay pods to the gateways of underlay subnets: when pinging an underlay gateway from an overlay pod, an unexpected neighbor event comes from the vxlan interface.

What you expected to happen

We expect overlay pods to reach the underlay gateway over an underlay+SNAT network path.

How to reproduce it (as minimally and precisely as possible)

Ping underlay gateway from overlay pod.

Anything else we need to know?

Besides the underlay gateway, if we only use part of an underlay subnet, the remaining unused IPs have the same problem as the gateway.

Environment

  • rama version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

Support IP multicast

Issue Description

Type: feature request

Describe what feature you want

  1. Support IP multicast in overlay network
  2. Support IP multicast between underlay network pod and underlying network

Additional context

Overlay pod using a masquerade path to access underlay pod

Bug Report

Type: bug report

What happened

An overlay pod accesses an underlay pod through a masquerade path rather than a vxlan tunnel when the underlay pod's Network does not include the node the overlay pod is on.

What you expected to happen

Overlay pods should access underlay pods through the vxlan tunnel, without masquerade.

How to reproduce it (as minimally and precisely as possible)

Creating two underlay Networks and an overlay Network reproduces it.

Anything else we need to know?

Environment

  • hybridnet version: v0.2.0
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

Optimization of IP allocation performance in large-scale clusters

Issue Description

Type: feature request

Describe what feature you want

By reducing the number of direct, uncached requests to the apiserver in the manager's pod reconciliation, we can significantly increase IP allocation performance.

For now, four such requests occur during one pod reconciliation. It is quite feasible to reduce them to one, which would bring a theoretical four-fold improvement to IP allocation.

Additional context

What we might need mostly:

  1. Remove the IP-allocated annotation on Pod and get the latest Pod allocation info from the local cache (see the sketch after this list).
  2. Redesign the IPInstance CR to remove the status-update step from IP allocation.
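
With controller-runtime, for example, a client obtained from the manager serves reads from the shared informer cache instead of the apiserver; the sketch below illustrates the pattern and is not the actual hybridnet manager code.

package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// PodReconciler embeds a manager-provided client, whose Get below is served
// from the informer cache rather than by a direct apiserver request.
type PodReconciler struct {
	client.Client
}

func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	pod := &corev1.Pod{}
	if err := r.Get(ctx, req.NamespacedName, pod); err != nil {
		// NotFound only means the pod is already gone; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ... IP allocation working on cached state only ...
	return ctrl.Result{}, nil
}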

Support concurrency in RemoteClusterStatusChecker

Issue Description

Type: feature request

Describe what feature you want

Currently RemoteClusterStatusChecker is a custom runnable, which means it does not support concurrency the way the other reconcilers do.
This PR tries to add concurrency support to RemoteClusterStatusChecker.

Additional context

use controller-runtime instead of code-generator

Issue Description

Type: feature request

Describe what feature you want

code-generator is too old to maintain and makes it difficult to introduce new objects.
controller-runtime is the present and the future.

Additional context

NONE

iterator value cannot be used in a closure function directly

Bug Report

Type: bug report

What happened

The iterator value cannot be used in a closure function directly.

What you expected to happen

Every closure function should have its own copy of the input value, as in the sketch below.
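
This is the classic Go loop-variable capture pitfall (only fixed in the language itself as of Go 1.22); a minimal reproduction and its fix:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for _, name := range []string{"a", "b", "c"} {
		wg.Add(1)
		// Bug (before Go 1.22): all closures share the single loop variable,
		// so every goroutine may observe the last value. Rebinding it gives
		// each closure its own copy.
		name := name
		go func() {
			defer wg.Done()
			fmt.Println(name)
		}()
	}
	wg.Wait()
}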

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • hybridnet version: v0.4.2
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

Always get an error specifying subnets for dualstack type pod

Bug Report

Type: bug report

What happened

For an IPv4/IPv6 dual-stack pod, subnets cannot be specified via the "networking.alibaba.com/specified-subnet"="<v4subnet>/<v6subnet>" annotation, because the webhook rejects it with a "specified subnet not found" error:


The webhook seems to regard the two subnets as a single name; see the manifest sketch below.
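
For reference, the failing manifest looks like this (subnet names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: dualstack-demo
  annotations:
    # "<v4 subnet>/<v6 subnet>", which the webhook wrongly parses as one name.
    networking.alibaba.com/specified-subnet: "subnet-v4/subnet-v6"
spec:
  containers:
  - name: app
    image: nginx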

What you expected to happen

Pod can be created successfully.

How to reproduce it (as minimally and precisely as possible)

Like above.

Anything else we need to know?

Environment

  • hybridnet version: 0.3.2
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

Not sure what they want to do. IPVLAN or MACVLAN

Bug Report

cilium/cilium#14436
That issue is based on IPVLAN, but this repo is based on MACVLAN. It is not clear which approach is intended.
Type: bug report

What happened

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • hybridnet version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

Support BGP underlay network

Issue Description

Type: feature request

Describe what feature you want

BGP is by far the most popular choice for large-scale clusters. Using BGP provides an almost hands-free way to initialize the container network, and it is also much more convenient for maintenance.

Additional context

Multi-cluster network connection

Issue Description

Type: feature request

Describe what feature you want

Connect networks across multiple clusters. There may be underlay and overlay networks outside the local cluster; treat them as CRs in the local cluster.

Additional context

The architecture is shown in a diagram attached to the original issue.

Context pass on in chain

Issue Description

Type: feature request

Describe what feature you want

In current hybridnet, the context is not passed along the call chain, so a controller-runtime client usually receives a context.TODO() or context.Background() instead. This is non-standard; we should keep context awareness everywhere.
Once fixed, when the manager is shutting down, all client-related goroutines will receive the context's cancellation signal and quit immediately, as in the sketch below.
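
A sketch of the intended pattern (the import path for hybridnet's API types is illustrative):

package controllers

import (
	"context"

	// Illustrative import path for hybridnet's API types.
	networkingv1 "github.com/alibaba/hybridnet/pkg/apis/networking/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type Reconciler struct {
	client.Client
}

// Threading the reconcile context down (instead of context.TODO()) lets
// this List abort promptly when the manager shuts down.
func (r *Reconciler) listNetworks(ctx context.Context) (*networkingv1.NetworkList, error) {
	networks := &networkingv1.NetworkList{}
	if err := r.List(ctx, networks); err != nil {
		return nil, err
	}
	return networks, nil
}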

Additional context

test webhook

It's just a test for dingding webhook
Issue Description

Type: feature request

Describe what feature you want

Additional context

hybrid-daemon policy container reporting "can not access kubernetes service, exiting"

Bug Report

Type: bug report

What happened

The hybrid-daemon policy container reports a "can not access kubernetes service, exiting" error.

What you expected to happen

hybrid-daemon should run normally.

How to reproduce it (as minimally and precisely as possible)

I just ran the command:

helm install hybridnet hybridnet/hybridnet -n kube-system

and the error occurred

Anything else we need to know?

Environment

  • hybridnet version: hybridnet:v0.4.2
  • OS (e.g. cat /etc/os-release): Ubuntu20.04 LTS
  • Kernel (e.g. uname -a):5.4.0-105-generic
  • Kubernetes version:
    kubenetes version:
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
  • Install tools: helm
  • Others:

Creating /32 or /128 Subnet should not be allowed

Issue Description

Type: feature request

Describe what feature you want

Creating a /32 or /128 Subnet should be rejected by webhook validation, since such subnets might cause unexpected errors; the check is sketched below.
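
The check itself is a one-liner on the parsed CIDR; a minimal Go sketch (not the actual webhook code):

package main

import (
	"fmt"
	"net"
)

// rejectHostOnly denies /32 (IPv4) and /128 (IPv6) subnets, which leave
// no allocatable address range.
func rejectHostOnly(cidr string) error {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return err
	}
	if ones, bits := ipnet.Mask.Size(); ones == bits {
		return fmt.Errorf("subnet %s is a single host address and is not allowed", cidr)
	}
	return nil
}

func main() {
	fmt.Println(rejectHostOnly("10.0.0.1/32")) // rejected
	fmt.Println(rejectHostOnly("10.0.0.0/24")) // <nil>
}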

Additional context

Unexpected route decision after accessing nodeport across nodes

Bug Report

Type: bug report

What happened

node1: 11.166.83.9
node2: 11.166.83.20
svc: nodePort 30991, only have one pod backend, pod ip: 100.88.253.95 on node2
network: vlanid 701
master interface: bond0

test network access: from node1 to node2:30991

#curl 11.166.83.20:30991
404 page not found

Running tcpdump on interface bond0.701 captures unexpected traffic:

#tcpdump -nv -i bond0.701 host 11.166.83.9 and host 11.166.83.20 and port 30991
tcpdump: listening on bond0.701, link-type EN10MB (Ethernet), capture size 262144 bytes
16:09:03.511036 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    11.166.83.20.30991 > 11.166.83.9.37957: Flags [S.], cksum 0xbd97 (incorrect -> 0xcde4), seq 2388492280, ack 4182406673, win 28960, options [mss 1460,sackOK,TS val 522611790 ecr 2192028254,nop,wscale 7], length 0
16:09:03.511207 IP (tos 0x0, ttl 63, id 5047, offset 0, flags [DF], proto TCP (6), length 52)
    11.166.83.20.30991 > 11.166.83.9.37957: Flags [.], cksum 0xbd8f (incorrect -> 0x6c9c), ack 83, win 227, options [nop,nop,TS val 522611790 ecr 2192028254], length 0
16:09:03.511374 IP (tos 0x0, ttl 63, id 5048, offset 0, flags [DF], proto TCP (6), length 228)
    11.166.83.20.30991 > 11.166.83.9.37957: Flags [P.], cksum 0xbe3f (incorrect -> 0x4865), seq 1:177, ack 83, win 227, options [nop,nop,TS val 522611790 ecr 2192028254], length 176
16:09:03.511649 IP (tos 0x0, ttl 63, id 5049, offset 0, flags [DF], proto TCP (6), length 52)
    11.166.83.20.30991 > 11.166.83.9.37957: Flags [F.], cksum 0xbd8f (incorrect -> 0x6be9), seq 177, ack 84, win 227, options [nop,nop,TS val 522611790 ecr 2192028255], length 0

As the reply packets above show, everything went out through bond0.701 after routing.
But packets between nodes are expected to go through bond0.

What you expected to happen

All reply network packets go through bond0, but not bond0.701.

How to reproduce it (as minimally and precisely as possible)

Access nodeport across nodes on underlay network mode.

Anything else we need to know?

Sometimes, although the traffic path is unexpected, it still works, because the container-network router receives these packets and forwards them to the nodes, provided rp_filter and source address validation are both disabled.

Environment

  • hybridnet version: v0.3.2
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others: The Kubernetes svc implementation is with kube-proxy.

Multi-tenancy

Issue Description

Type: feature request

Describe what feature you want

Multi-tenancy is a common topic in Kubernetes, and some container networking solutions already have related abilities.
Once implemented, it would bring the following advantages:

  • Container networking isolation between tenants
  • Allow IPAM conflict between tenants

Additional context

Remove DualStack feature gate and make it a built-in behavior

Issue Description

Type: feature request

Describe what feature you want

Now, to deploy a dual-stack hybridnet cluster, we need to enable the DualStack feature gate via manager/webhook parameters.

But it now seems that "an IPv4-only cluster with the DualStack feature gate closed" and "a dual-stack cluster (DualStack feature gate opened) with only IPv4 subnets" are not different at all. Maybe it's time to remove the DualStack feature gate and make it a built-in behavior.

Additional context

invalid memory address or nil pointer dereference

Bug Report

Type: bug report

What happened

Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

What you expected to happen

How to reproduce it (as minimally and precisely as possible)


Anything else we need to know?

Environment

  • hybridnet version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Kubernetes version:
  • Install tools:
  • Others:

Name of the vxlan parent interface should be no longer than 8 characters

Issue Description

Type: feature request

Describe what feature you want

This is not actually an issue or a bug.

Because hybridnet uses ".vxlan4" as the fixed suffix of the generated vxlan interface, and Linux limits interface names to 15 characters, we should announce the requirement clearly: the name of the vxlan parent interface must be no longer than 8 characters (15 minus the 7-character suffix).

Additional context

Global BGP network support

Issue Description

Type: feature request

Describe what feature you want

Currently we need to create one BGP-mode hybridnet Network for each ToR switch, which cannot satisfy Pods that need both static IP addresses and cluster-wide schedulability.

Additional context

This is achievable with the BGP protocol; all we need to do is change the way routes are advertised.

use cert-manager for deployment automation

Issue Description

Type: feature request

Describe what feature you want

Currently the rama webhook takes full control of its webhook configurations, including creation and update.
But we have to generate and configure all certificates (root CA, TLS cert, TLS key) through a secret, which brings trouble to users and is not the usual k8s-style way.
So cert-manager should be introduced for deployment automation; a sketch follows.
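
With cert-manager, the webhook's serving certificate could be declared rather than hand-generated; a sketch using standard cert-manager resources (names and namespace illustrative):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: rama-selfsigned
  namespace: kube-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: rama-webhook-cert
  namespace: kube-system
spec:
  secretName: rama-webhook-tls      # mounted by the webhook pod
  dnsNames:
  - rama-webhook.kube-system.svc    # illustrative service name
  issuerRef:
    name: rama-selfsigned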

Additional context

cert-manager would become a required prerequisite for installing rama.

support more configurations on dualstack mode

Issue Description

Type: feature request

Describe what feature you want

When using dual-stack mode in hybridnet, some fixed default configurations may become inconvenient for users.
We should figure out all of them and make them configurable:

  • default ip-family assignment on dualstack mode

Additional context

use --prefer-interfaces to assign an interface list failed

What happened:

There are two types of nodes: some use eth1 as the node's container-networking NIC, and some use eth0.

I wanted to cover all of the nodes with a single rama-daemon DaemonSet configured with "--prefer-interfaces=eth0,eth1", but something failed: pods get stuck in ContainerCreating with an "ensure vlan interface error: Link not found".

What you expected to happen:

Pod can be created correctly on all of the nodes.

How to reproduce it (as minimally and precisely as possible):

Deploy a rama-daemon DaemonSet with "--prefer-interfaces=eth0,eth1" in an environment where NIC names differ across nodes.

Anything else we need to know?:

NONE

Environment:

  • rama version: unreleased version
  • OS (e.g: cat /etc/os-release): CentOS
  • Kernel (e.g. uname -a): 4.19
  • Kubernetes version: 1.16
  • Install tools: trident
  • Others:

Using charts for user-friendly deployment

Issue Description

Type: feature request

Describe what feature you want

Use charts to deploy hybridnet, including

  • versioning
  • custom parameters
  • one-command installation

Additional context
