docs's Issues

SecurityGroup documentation error

Following the SecurityGroup configuration documentation, I tested a security group and found that its behavior does not match expectations.

The security group is configured as follows: allow the gateway 192.50.0.1 and deny 192.50.0.3.

apiVersion: kubeovn.io/v1
kind: SecurityGroup
metadata:
  name: sg-gw-both
spec:
  allowSameGroupTraffic: true
  egressRules:
  - ipVersion: ipv4
    policy: deny
    priority: 2
    protocol: all
    remoteAddress: 192.50.0.3/24
    remoteType: address
  - ipVersion: ipv4
    policy: allow
    priority: 1
    protocol: all
    remoteAddress: 192.50.0.1/24
    remoteType: address
  ingressRules:
  - ipVersion: ipv4
    policy: deny
    priority: 2
    protocol: icmp
    remoteAddress: 192.50.0.3/24
    remoteType: address
  - ipVersion: ipv4
    policy: allow
    priority: 1
    protocol: all
    remoteAddress: 192.50.0.1/24
    remoteType: address

In practice, 192.50.0.3 is not blocked. Checking the ACL rules shows that they were issued as expected:

$ kubectl ko nbctl acl-list ovn.sg.sg.gw.both
from-lport  2299 (inport == @ovn.sg.sg.gw.both && ip4 && ip4.dst == 192.50.0.1/24) allow-related
from-lport  2298 (inport == @ovn.sg.sg.gw.both && ip4 && ip4.dst == 192.50.0.3/24) drop
from-lport  2004 (inport == @ovn.sg.sg.gw.both && ip4 && ip4.dst == $ovn.sg.sg.gw.both.associated.v4) allow-related
from-lport  2004 (inport == @ovn.sg.sg.gw.both && ip6 && ip6.dst == $ovn.sg.sg.gw.both.associated.v6) allow-related
  to-lport  2299 (outport == @ovn.sg.sg.gw.both && ip4 && ip4.src == 192.50.0.1/24) allow-related
  to-lport  2298 (outport == @ovn.sg.sg.gw.both && ip4 && ip4.src == 192.50.0.3/24 && icmp4) drop
  to-lport  2005 (inport == @ovn.sg.sg.gw.both && arp) allow-related
  to-lport  2005 (inport == @ovn.sg.sg.gw.both && icmp6.type == {130, 133, 135, 136} && icmp6.code == 0 && ip.ttl == 255) allow-related
  to-lport  2005 (inport == @ovn.sg.sg.gw.both && ip.proto == 112) allow-related
  to-lport  2005 (inport == @ovn.sg.sg.gw.both && udp.src == 546 && udp.dst == 547 && ip6) allow-related
  to-lport  2005 (inport == @ovn.sg.sg.gw.both && udp.src == 68 && udp.dst == 67 && ip4) allow-related
  to-lport  2005 (outport == @ovn.sg.sg.gw.both && arp) allow-related
  to-lport  2005 (outport == @ovn.sg.sg.gw.both && icmp6.type == {130, 134, 135, 136} && icmp6.code == 0 && ip.ttl == 255) allow-related
  to-lport  2005 (outport == @ovn.sg.sg.gw.both && ip.proto == 112) allow-related
  to-lport  2005 (outport == @ovn.sg.sg.gw.both && udp.src == 547 && udp.dst == 546 && ip6) allow-related
  to-lport  2005 (outport == @ovn.sg.sg.gw.both && udp.src == 67 && udp.dst == 68 && ip4) allow-related
  to-lport  2004 (outport == @ovn.sg.sg.gw.both && ip4 && ip4.src == $ovn.sg.sg.gw.both.associated.v4) allow-related
  to-lport  2004 (outport == @ovn.sg.sg.gw.both && ip6 && ip6.src == $ovn.sg.sg.gw.both.associated.v6) allow-related

Tracing the flow tables with kubectl ko trace gives the following result:

# busybox01 is the Pod with the security group attached; its IP is 192.50.0.2
kubectl ko trace vpc1/busybox01 192.50.0.3 icmp
+ kubectl exec ovn-central-674d5cb9d-t66rc -n kube-system -c ovn-central -- ovn-trace vpc1-subnet1 'inport == "busybox01.vpc1" && ip.ttl == 64 && icmp && eth.src == 00:00:00:CD:2F:58 && ip4.src == 192.50.0.2 && eth.dst == 00:00:00:AE:3E:16 && ip4.dst == 192.50.0.3 && ct.new'

ingress(dp="vpc1-subnet1", inport="busybox01.vpc1")
---------------------------------------------------
 0. ls_in_check_port_sec (northd.c:8603): 1, priority 50, uuid 6aea41c7
    reg0[15] = check_in_port_sec();
    next;
 4. ls_in_pre_acl (northd.c:5896): ip, priority 100, uuid 0194d5d4
    reg0[0] = 1;
    next;
 6. ls_in_pre_stateful (northd.c:6114): reg0[0] == 1, priority 100, uuid e2b2e041
    ct_next;

ct_next(ct_state=est|trk /* default (use --ct to customize) */)
---------------------------------------------------------------
 7. ls_in_acl_hint (northd.c:6202): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid c004fd96
    reg0[8] = 1;
    reg0[10] = 1;
    next;
 8. ls_in_acl (northd.c:6444): reg0[8] == 1 && (inport == @ovn.sg.sg.gw.both && ip4 && ip4.dst == 192.50.0.1/24), priority 3299, uuid f036168e
    next;

-----------------------------------------------------------------

Part of the logical flow table is shown above. Looking at ls_in_acl in table 8:

 8. ls_in_acl (northd.c:6444): reg0[8] == 1 && (inport == @ovn.sg.sg.gw.both && ip4 && ip4.dst == 192.50.0.1/24), priority 3299, uuid f036168e
    next;

The packet to 192.50.0.3 is not handled by the 192.50.0.3/24 rule. It appears that when an ACL is issued with a mask, the address is matched as a whole network segment: 192.50.0.1/24 and 192.50.0.3/24 both denote 192.50.0.0/24, so the higher-priority allow rule for 192.50.0.1/24 (priority 2299 in the ACL list) matches first and the drop rule for 192.50.0.3/24 (priority 2298) is never hit.

将安全组中的 remoteAddress 的 IP 掩码去除后,安全组达到预期的效果
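
For reference, the configuration with the fix applied would look like the following (a sketch of the same security group with the /24 masks dropped, so each rule matches only the single host address):

apiVersion: kubeovn.io/v1
kind: SecurityGroup
metadata:
  name: sg-gw-both
spec:
  allowSameGroupTraffic: true
  egressRules:
  - ipVersion: ipv4
    policy: deny
    priority: 2
    protocol: all
    remoteAddress: 192.50.0.3   # host address, no mask
    remoteType: address
  - ipVersion: ipv4
    policy: allow
    priority: 1
    protocol: all
    remoteAddress: 192.50.0.1   # host address, no mask
    remoteType: address
  ingressRules:
  - ipVersion: ipv4
    policy: deny
    priority: 2
    protocol: icmp
    remoteAddress: 192.50.0.3
    remoteType: address
  - ipVersion: ipv4
    policy: allow
    priority: 1
    protocol: all
    remoteAddress: 192.50.0.1
    remoteType: address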

Website does not have the correct trademark disclaimer

As part of the ongoing effort tracked in cncf/techdocs#198, we noticed that the website does not pass the trademark criteria on CLOMonitor.

To fix this:
Head to the source code of the website. In the <footer> section, add a disclaimer or a link to the Linux Foundation trademark usage page:

Disclaimer

<footer>
   <p>The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, 
         please see our <a href="https://www.linuxfoundation.org/legal/trademark-usage">Trademark Usage page</a>.
   </p>
</footer>

Link

 <footer>
      <ul>
          <li><a href="https://www.linuxfoundation.org/legal/trademark-usage">Trademarks</a></li>
      </ul>
 </footer>

Deploying the documentation site on an internal network

Hello, our company intranet has no internet access, so we would like to host this documentation on our internal network. How should we go about it? Is there something like the website service used for the official Kubernetes documentation that we could move over in one step? Any guidance would be appreciated.
I have looked at the ci.yaml file under the .github/workflows directory of https://github.com/kubeovn/docs, but I still don't have a good deployment approach. Could you advise?
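
A minimal sketch of one possible approach, assuming the site is a static build driven by MkDocs as the ci.yaml workflow suggests (the package list and the nginx image below are illustrative assumptions, not confirmed by the repository):

# On a machine with internet access, clone the docs repository
git clone https://github.com/kubeovn/docs.git
cd docs

# Install the build dependencies and build the static site into ./site
# (check ci.yaml for the exact packages the CI installs)
pip install mkdocs mkdocs-material
mkdocs build

# Copy ./site to the intranet host and serve it with any static web server, e.g.:
docker run -d --name kubeovn-docs -p 8080:80 \
  -v $(pwd)/site:/usr/share/nginx/html:ro nginx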

Please add examples of how to achieve, via ACLs, an effect equivalent to combining `natOutgoing: true` + `private: true`

Issue requested by kubeovn/kube-ovn#3408 (comment).

Motivation

In kubeovn/kube-ovn#3408 I noticed that it is not currently possible to combine natOutgoing: true + private: true.

The effect that I would like to achieve is internal isolation between the subnets, while also allowing the pods to access addresses on the internet (e.g. for downloading datasets) via NAT-ing (so that external internet addresses cannot initiate any connection with a pod inside the cluster).

Constraints

I don't know beforehand which CIDRs the pods need to access/not to access.

Basically the pods should be able to access the whole "external world"/internet, and I don't have a predefined list of all CIDRs inside the cluster (new subnets are created and deleted dynamically all the time).

Documentation Request

One of the OVN contributors suggested in kubeovn/kube-ovn#3408 (comment) that it is possible to achieve this via ACLs. However, I find it very hard to figure out by myself, and I imagine other people might be struggling with it too.

It would be nice if the docs contained examples of how to achieve this by manipulating ACLs.
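
As a starting point, here is a sketch of the idea using the acls field of the Subnet resource (the match expressions, priorities, and CIDRs below are illustrative assumptions, not a verified recipe from the maintainers):

apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: tenant-a
spec:
  cidrBlock: 10.100.0.0/16
  natOutgoing: true   # SNAT traffic leaving the cluster, so external hosts cannot initiate connections
  acls:
  # allow traffic that stays inside this subnet (higher priority wins)
  - direction: to-lport
    priority: 1002
    match: "ip4.src == 10.100.0.0/16 && ip4.dst == 10.100.0.0/16"
    action: allow-related
  # drop traffic arriving from other cluster-internal (RFC 1918) ranges
  - direction: to-lport
    priority: 1001
    match: "ip4.src == 10.0.0.0/8 || ip4.src == 172.16.0.0/12 || ip4.src == 192.168.0.0/16"
    action: drop

Depending on the deployment, extra allow rules may be needed for the subnet gateway, node-to-pod health checks, and DNS; that is exactly the kind of detail a documented example should spell out.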
