istio / ztunnel
The `ztunnel` component of ambient mesh
License: Apache License 2.0
Tests should just be isolated ztunnel tests; full Kubernetes e2e tests will be handled by the existing (Go) integ test suite.
I tried compiling on a Mac M1, and it was a no-go because of Linux-only configuration being present.
On Ubuntu I had to install the librust-clang-sys-dev and protobuf-compiler packages to get things working there.
If you boot up and have no connection to the control plane, do you forward everything in plain text, or do you drop all traffic?
Decision: fail-closed only.
Traffic sent from outside ambient to a Pod in ambient should use the Waypoint if available.
There are some cases guarded with Skip in existing tests, but that might not be comprehensive.
Example:
Traffic arrives on the inbound-plaintext port of a uProxy, but the destination is part of an ambient mesh; we should either deny the traffic, or initiate a tunnel.
For this milestone, we likely just want to deny it. Well-behaved traffic should use proper ingress.
Can be cheaply incremented (no locking, etc). Used primarily for devs to see how many times some codepath is hit. Not “prometheus-y” metrics (e.g. byte counts, etc).
It should be incredibly easy to create a counter in the code and update it.
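A minimal sketch of the shape this could take, assuming plain relaxed atomics (the names here are illustrative, not an actual ztunnel API):

    use std::sync::atomic::{AtomicU64, Ordering};

    // One static per counter. A relaxed fetch_add is a single lock-free
    // atomic increment, so it is cheap enough to sprinkle on hot paths.
    pub static OUTBOUND_HANDSHAKES: AtomicU64 = AtomicU64::new(0);

    pub fn hit(counter: &AtomicU64) {
        counter.fetch_add(1, Ordering::Relaxed);
    }

    // At the codepath being counted:
    // hit(&OUTBOUND_HANDSHAKES);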
Is there any option/config to control upstream selection when a VIP upstream has many workload endpoints?
We should identify all of the “bad” cases we can possibly hit, make sure to have tests for them, and verify sane behavior.
Examples:
Sending to a closed port
Note: I tested this and we hang for ~60-90 seconds before timing out (see the connect-timeout sketch after this list).
Sending to an unroutable address
Sending to an unresolvable VIP
… others to be investigated
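For the closed-port hang above, a likely fix is bounding connect time ourselves instead of waiting out the kernel's TCP timeout; a minimal tokio sketch (the 10s value is an illustrative choice, not a decided default):

    use std::time::Duration;
    use tokio::{net::TcpStream, time::timeout};

    async fn connect_upstream(addr: std::net::SocketAddr) -> std::io::Result<TcpStream> {
        // Bound the connect attempt ourselves; otherwise an unresponsive peer
        // leaves us waiting on the kernel timeout (the ~60-90s observed above).
        match timeout(Duration::from_secs(10), TcpStream::connect(addr)).await {
            Ok(res) => res,
            Err(_elapsed) => Err(std::io::Error::new(
                std::io::ErrorKind::TimedOut,
                "upstream connect timed out",
            )),
        }
    }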
This was an initial attempt in Go:
import (
	"fmt"
	"net"
	"net/http"
	"net/netip"
	"strconv"
	// ... plus the project-internal log, spiffe, uproxyapi, and util packages
)

// AssertRBAC checks the connection against the destination workload's RBAC
// policies: DENY policies are evaluated first, and if no ALLOW policies
// exist, traffic is allowed by default.
func (p *Proxy) AssertRBAC(r *http.Request) error {
	ip, dport, err := net.SplitHostPort(r.Host)
	if err != nil {
		return err
	}
	pip, err := netip.ParseAddr(ip)
	if err != nil {
		return err
	}
	wl := p.ConnectionManager.FindWorkloadByAddr(pip)
	if wl == nil {
		return fmt.Errorf("RBAC: no workload found for %v", pip)
	}
	// Extract the peer identity from the client certificate, if one was presented.
	var identity string
	if r.TLS != nil && len(r.TLS.PeerCertificates) > 0 {
		n, err := util.ExtractIDs(r.TLS.PeerCertificates[0].Extensions)
		if err != nil {
			return err
		}
		if len(n) > 0 {
			identity = n[0]
		}
	}
	if wl.RBAC == nil {
		return nil
	}
	var namespace string
	if identity != "" {
		s, err := spiffe.ParseIdentity(identity)
		if err != nil {
			return err
		}
		namespace = s.Namespace
	}
	log := log.WithLabels("ip", pip.String(), "ident", identity, "port", dport)
	// This is from the PEP, which handles this already.
	// TODO: make this check more robust
	if wl.RemoteProxy != (netip.Addr{}) && identity == wl.Identity {
		if len(wl.RBAC.Allow) == 0 {
			log.Debug("allow (no policies)")
		} else {
			log.Info("allow (from remote)")
		}
		return nil
	}
	// DENY policies take precedence over ALLOW.
	deny := rbacMatch(wl.RBAC.Deny, namespace, identity, dport)
	if deny {
		// TODO context
		return fmt.Errorf("RBAC: deny")
	}
	if len(wl.RBAC.Allow) == 0 {
		log.Debug("allow (no policies)")
		return nil
	}
	allow := rbacMatch(wl.RBAC.Allow, namespace, identity, dport)
	if allow {
		log.Info("allow")
		return nil
	}
	return fmt.Errorf("RBAC: no allow matched")
}

// rbacMatch reports whether any policy matches the source namespace/identity
// and the destination port. A policy matches when at least one of its rules
// matches and at least one of its "when" conditions matches (an empty "when"
// list matches everything).
func rbacMatch(pols []*uproxyapi.Policy, namespace string, identity string, dports string) bool {
	dport, _ := strconv.Atoi(dports)
	for _, pol := range pols {
		ruleMatch := false
		for _, rule := range pol.Rule {
			rmatch := true
			if rule.Namespace != "" && rule.Namespace != namespace {
				rmatch = false
			}
			if rule.Identity != "" && "spiffe://"+rule.Identity != identity {
				rmatch = false
			}
			if rule.Invert {
				rmatch = !rmatch
			}
			ruleMatch = ruleMatch || rmatch
		}
		whenMatch := len(pol.When) == 0
		for _, when := range pol.When {
			rmatch := true
			if when.Port != 0 && when.Port != uint32(dport) {
				rmatch = false
			}
			if when.Invert {
				rmatch = !rmatch
			}
			whenMatch = whenMatch || rmatch
		}
		if ruleMatch && whenMatch {
			return true
		}
	}
	return false
}
Likely needs a lot of work though.
And make sure existing CPU profiling still works
Trying to run our existing integration tests on my linux machine in KinD (istio/istio#41633)
I have no name!@captured-v1-54f5cc6f5c-7rl45:/$ curl captured:80
curl: (6) Could not resolve host: captured
2022-10-26T18:22:29.926979Z INFO ztunnel::proxy::outbound: accepted outbound connection from [::ffff:10.244.1.12]:34202
2022-10-26T18:22:29.927015Z INFO ztunnel::proxy::outbound: Proxying to 10.96.18.21:123 using TCP via 10.96.18.21:123 type Passthrough
2022-10-26T18:23:38.889583Z WARN ztunnel::xds::client: XDS client error: gRPC error (Unknown error): error reading a body from connection: broken pipe, retrying
2022-10-26T18:23:38.889606Z INFO ztunnel::xds::client: Starting ADS client with 21 workloads
Trace should give extreme detail on every decision made for every single connection. Essentially super verbose connection logging. Follow prior art in Envoy.
There should be a mechanism to enable TRACE level logging for traffic to/from a single IP, or a single service account.
Trace should have no cost when not enabled
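One way this could look with the tracing crate (an assumption, as are the span and field names): wrap each connection in a span carrying its addresses, and use a subscriber filter directive to enable TRACE for a single IP. When TRACE is not enabled, events are skipped after a cached interest check, so the cost is near zero.

    use tracing::{trace, trace_span, Instrument};

    async fn handle(src: std::net::SocketAddr, dst: std::net::SocketAddr) {
        // All per-connection events land in this span, so an EnvFilter
        // directive like `ztunnel::proxy[connection{src=10.244.1.12}]=trace`
        // can enable TRACE for one source IP only.
        let span = trace_span!("connection", src = %src, dst = %dst);
        async move {
            trace!("connection accepted");
            // ... every proxying decision logged at TRACE ...
            trace!("connection closed");
        }
        .instrument(span)
        .await;
    }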
Using a custom-built proxy is great, but I don't think we need to lose all the integrations and support we built in pilot-agent.
This preserves those integrations and also simplifies the code in ztunnel.
In time we may decide to rewrite all of that in Rust as well, but one component at a time.
SOCKS is a very simple protocol, easy to implement (there is probably already a Rust implementation; a sketch follows below). This can be optionally compiled in. It will make it very easy to do local development without eBPF/interception/root, using common tools (virtually all apps support an option to use a SOCKS proxy).
Happy to work on it.
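To show how little protocol there is, a minimal sketch of the SOCKS5 CONNECT handshake with tokio (no auth, IPv4 only; a real implementation would likely pull in an existing crate):

    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpStream;

    // Accepts a SOCKS5 client, performs the no-auth CONNECT handshake, and
    // returns the stream plus the requested destination. Only CMD=CONNECT
    // and ATYP=IPv4 are handled in this sketch.
    async fn socks5_accept(mut client: TcpStream) -> std::io::Result<(TcpStream, std::net::SocketAddrV4)> {
        // Greeting: VER, NMETHODS, METHODS...; reply "no authentication required".
        let mut head = [0u8; 2];
        client.read_exact(&mut head).await?;
        let mut methods = vec![0u8; head[1] as usize];
        client.read_exact(&mut methods).await?;
        client.write_all(&[0x05, 0x00]).await?;

        // Request: VER, CMD (1 = CONNECT), RSV, ATYP (1 = IPv4), DST.ADDR, DST.PORT.
        let mut req = [0u8; 4];
        client.read_exact(&mut req).await?;
        let mut addr = [0u8; 4];
        client.read_exact(&mut addr).await?;
        let mut port = [0u8; 2];
        client.read_exact(&mut port).await?;
        let dst = std::net::SocketAddrV4::new(addr.into(), u16::from_be_bytes(port));

        // Reply: succeeded; bound address left zeroed for brevity.
        client.write_all(&[0x05, 0x00, 0x00, 0x01, 0, 0, 0, 0, 0, 0]).await?;
        Ok((client, dst))
    }

The caller then dials the returned destination and splices bytes in both directions, e.g. with tokio::io::copy_bidirectional.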
Should use a strict cipher set and TLS 1.3 only
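A sketch of what that could look like with the rustls 0.21 builder API (the specific suite and key-exchange picks are illustrative, not a vetted policy):

    use std::sync::Arc;
    use rustls::{cipher_suite, kx_group, version, ServerConfig};

    fn strict_tls(
        certs: Vec<rustls::Certificate>,
        key: rustls::PrivateKey,
    ) -> Result<Arc<ServerConfig>, rustls::Error> {
        let config = ServerConfig::builder()
            // Offer only TLS 1.3 AEAD suites.
            .with_cipher_suites(&[
                cipher_suite::TLS13_AES_256_GCM_SHA384,
                cipher_suite::TLS13_CHACHA20_POLY1305_SHA256,
            ])
            .with_kx_groups(&[&kx_group::X25519])
            // Refuse anything below TLS 1.3 outright.
            .with_protocol_versions(&[&version::TLS13])?
            // The real config must require and verify client certs (mTLS).
            .with_no_client_auth()
            .with_single_cert(certs, key)?;
        Ok(Arc::new(config))
    }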
Cheap, in memory, stores some list of past events, similar to Envoy access logs.
There should be a cheap in-memory connection log that's accessible for debugging purposes. The runtime cost of maintaining this log should be extremely low. This could be achieved by a per-thread in-memory list of events including the 5-tuple of the connection, what happened (open, close, error, pkt/bytes sent) and a timestamp. Whenever something happens on a connection, we can add to this list a note at very low cost. There could be a configurable limit to the size of the thing.
With this, when debugging a user could run a ztctl command to explore this data. They should be able to pass in an IP or 5-tuple and see all connections that have occurred recently with that configuration. There should be a watch command that shows the connections currently flowing through the system as they come and go. The debug dump tool should dump every thread's connection log so that it can be analyzed off-box by a support engineer.
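A sketch of the per-thread list (struct and field names are assumptions; a real entry would carry the full 5-tuple rather than just source/destination):

    use std::cell::RefCell;
    use std::collections::VecDeque;
    use std::net::SocketAddr;
    use std::time::SystemTime;

    enum ConnEvent {
        Open,
        Close,
        Error(String),
        Sent { packets: u64, bytes: u64 },
    }

    struct LogEntry {
        at: SystemTime,
        src: SocketAddr,
        dst: SocketAddr,
        event: ConnEvent,
    }

    const MAX_EVENTS: usize = 4096; // the configurable size limit

    thread_local! {
        // One ring buffer per worker thread, so recording never takes a lock.
        static CONN_LOG: RefCell<VecDeque<LogEntry>> =
            RefCell::new(VecDeque::with_capacity(MAX_EVENTS));
    }

    fn record(src: SocketAddr, dst: SocketAddr, event: ConnEvent) {
        CONN_LOG.with(|log| {
            let mut log = log.borrow_mut();
            if log.len() == MAX_EVENTS {
                log.pop_front(); // evict the oldest entry once full
            }
            log.push_back(LogEntry { at: SystemTime::now(), src, dst, event });
        });
    }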
Currently, we present certs and encrypt, but we do not verify the root cert or the spiffe:// identity. We must have this on both client and server.
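For the spiffe:// half, one option is sketched below with the x509-parser crate (an assumed dependency): after the handshake, pull the URI SANs out of the peer's leaf certificate and compare against the identity we expect. Root-of-trust verification belongs in the TLS verifier itself.

    use x509_parser::prelude::*;

    // Returns true if the DER-encoded leaf certificate carries the expected
    // spiffe:// identity among its URI SANs.
    fn peer_matches(leaf_der: &[u8], expected: &str) -> bool {
        let Ok((_, cert)) = X509Certificate::from_der(leaf_der) else {
            return false;
        };
        let Ok(Some(san)) = cert.subject_alternative_name() else {
            return false;
        };
        san.value
            .general_names
            .iter()
            .any(|name| matches!(name, GeneralName::URI(uri) if *uri == expected))
    }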
Currently, the build fails since we use the FIPS boringssl, which only works on amd64. Even with FIPS disabled, we still get issues around poly_rq_mul.S that we will need to resolve.
Split readiness probe and admin interface; admin interface should only be over localhost or UDS.
Depends on #12; they may be 1 task depending on the impl
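A sketch of the bind split (the ports are illustrative): readiness stays reachable for the kubelet, while admin only ever listens on loopback or a Unix domain socket.

    use tokio::net::{TcpListener, UnixListener};

    async fn bind_admin() -> std::io::Result<()> {
        // Readiness must be reachable by the kubelet, so it binds wide.
        let _ready = TcpListener::bind(("0.0.0.0", 15021)).await?;
        // Admin is loopback-only; off-node callers can never reach it.
        let _admin = TcpListener::bind(("127.0.0.1", 15000)).await?;
        // Alternatively, a UDS removes the TCP exposure entirely.
        let _admin_uds = UnixListener::bind("/var/run/ztunnel/admin.sock")?;
        Ok(())
    }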
HBONE connections between zTunnels with the same source and destination identities should be reusable across multiple underlying client connections (streams inside HBONE).
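A sketch of the pooling idea using the h2 crate (an assumption, as are the key and type names): keep one HTTP/2 connection per (source identity, destination identity) pair, and open each new client connection as a CONNECT stream on it.

    use std::collections::HashMap;
    use std::sync::Arc;
    use tokio::sync::Mutex;

    // Pool key: (source identity, destination identity).
    type Key = (String, String);
    type Sender = h2::client::SendRequest<bytes::Bytes>;

    #[derive(Default, Clone)]
    struct HbonePool {
        conns: Arc<Mutex<HashMap<Key, Sender>>>,
    }

    impl HbonePool {
        // Reuse the pooled HTTP/2 connection when present; each CONNECT
        // becomes a new stream multiplexed onto the same connection.
        async fn connect(&self, key: Key, dst: &str) -> anyhow::Result<h2::client::ResponseFuture> {
            let mut conns = self.conns.lock().await;
            let sender = match conns.get(&key) {
                Some(s) => s.clone(),
                None => {
                    let tcp = tokio::net::TcpStream::connect(dst).await?;
                    // Real code performs the mTLS handshake here before h2.
                    let (send, conn) = h2::client::handshake(tcp).await?;
                    tokio::spawn(conn); // drive the connection in the background
                    conns.insert(key, send.clone());
                    send
                }
            };
            drop(conns); // do not hold the pool lock while waiting for stream capacity
            let req = http::Request::connect(dst).body(())?;
            let mut ready = sender.ready().await?;
            let (resp, _send_stream) = ready.send_request(req, false)?;
            Ok(resp)
        }
    }

Streams on a pooled connection share its flow-control window, so the real pool likely needs per-connection stream limits and eviction of dead connections.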
Likely a dedicated admin thread, and 1-2 worker threads.
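A sketch of that split with tokio's runtime builder (thread names and counts are illustrative):

    use tokio::runtime::{Builder, Runtime};

    fn build_runtimes() -> std::io::Result<(Runtime, Runtime)> {
        // A small fixed pool for proxying; async I/O lets a couple of
        // threads multiplex many connections.
        let workers = Builder::new_multi_thread()
            .worker_threads(2)
            .thread_name("ztunnel-worker")
            .enable_all()
            .build()?;
        // A dedicated single-threaded runtime keeps admin/readiness
        // responsive even when the workers are saturated.
        let admin = Builder::new_current_thread().enable_all().build()?;
        Ok((workers, admin))
    }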
Combinations of captured/uncaptured all work. All of our existing integration tests in ambient/ that don’t use waypoints should pass with the new zTunnel.
Automated tests of basic attacks:
SYN flood
Ping/ICMP flood
Slow loris (are we vulnerable to L7 things here?)
The libraries we use may have sufficient testing depending on the layer each attack targets.
Command line flags can be provided as part of “bootstrap” config (flags still take precedence)
Any xDS objects can be specified here
It can be used in conjunction with xDS
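A sketch of the layering (the struct and field names are hypothetical, not the real config schema): deserialize the bootstrap file first, then let any explicitly set flag override it.

    // Hypothetical bootstrap schema, for illustration only.
    #[derive(serde::Deserialize, Default)]
    struct Bootstrap {
        proxy_mode: Option<String>,
        xds_address: Option<String>,
    }

    // Flags parsed from the command line (also hypothetical).
    struct Flags {
        proxy_mode: Option<String>,
        xds_address: Option<String>,
    }

    // Merge with flags taking precedence over the bootstrap file.
    fn effective(bootstrap: Bootstrap, flags: Flags) -> Bootstrap {
        Bootstrap {
            proxy_mode: flags.proxy_mode.or(bootstrap.proxy_mode),
            xds_address: flags.xds_address.or(bootstrap.xds_address),
        }
    }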
TUN generally requires an IP stack; in Go I've used lwIP and gvisor, and AFAIK lwIP is also supported in Rust (at least in Rust embedded).
It is possible to use TUN without root by setting the owner of the tun device. This is intended for running zTunnel on VMs or on Android, as well as in cases where Tproxy + eBPF are not available.
The admin interface should expose endpoints for:
CPU profiling
Heap profiling (#19)
Allocation profiling
Additionally, we should have tools/scripts to easily generate flame graphs from the output.
Support for pprof format will allow piggybacking on some of Envoy’s tooling.
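A sketch with the pprof crate (an assumed dependency), which can emit both flame graph SVGs and pprof-format protos from the same samples:

    fn cpu_profile() -> anyhow::Result<()> {
        // Sample the whole process at 100 Hz while traffic runs.
        let guard = pprof::ProfilerGuard::new(100)?;

        // ... drive some load through the proxy ...

        let report = guard.report().build()?;
        // Flame graph for humans (the crate's "flamegraph" feature)...
        report.flamegraph(std::fs::File::create("ztunnel-cpu.svg")?)?;
        // ...while `report.pprof()` (behind the "protobuf" feature) emits the
        // pprof proto that existing Envoy-adjacent tooling understands.
        Ok(())
    }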
This is already done.
(slow loris attacks, spiky workloads, etc.)
Should we have access logs similar to envoy?
How configurable should they be?
Since ztunnel runs in standalone mode per node, is there any plan to support HA for it?