easegress-io / easegress
A Cloud Native traffic orchestration system
Home Page: https://megaease.com/easegress/
License: Apache License 2.0
The pid file feature is used, it seems, only for the example scripts. Overall, these example scripts are a bit of a handful to parse at first glance; it may be easier for individuals coming into the project if this were a docker-compose operation instead.
Hey folks, I was reviewing this for possible use in another project and realized that the iris framework should probably be abandoned. As such, I've made a quick PR (there are a few corners cut in the wait groups and signal handling) to bring the codebase under the go-chi HTTP framework.
Please let me know if there is anything missing for this request
#23
WebSocket is a widely used protocol for full-duplex communication between client and server; it relies on TCP.[1] WebSocket is supported by many reverse proxies, e.g., NGINX[2], Traefik, and so on.
We propose to add a WebSocketServer as a BusinessController to Easegress. (It should be treated as an HTTPServer-like traffic gate in the future.) We choose github.com/gorilla/websocket as the WebSocket client implementation, since it has rich features supported and a quite active community (15k stars / 2.5k forks). Compared to the original golang.org/x/net/websocket package, it can receive fragmented messages and send close messages.[3]
kind: WebSocketServer
name: websocketSvr
https: false # clients need to use http/https first for the connection upgrade
certBase64:
keyBase64:
port: 10020 # the proxy server's listening port
backend: ws://localhost:3001 # the reserved proxy target
# Easegress will examine the backend URL's scheme. If it starts with `wss`,
# then `wssCertBase64` and `wssKeyBase64` must not be empty.
wssCertBase64: # wss backend certificate in base64 format
wssKeyBase64: # wss backend key in base64 format
+--------------+ +--------------+ +--------------+
| | 1 | | 2 | |
| client +--------------->| Easegress +--------------->| websocket |
| |<---------------+(WebSocketSvr)|<---------------+ backend |
| | 4 | | 3 | |
+--------------+ +--------------+ +--------------+
The WebSocketServer shares some configurations with HTTPServer. Since WebSocket runs on HTTP/1.1 only, and the reverse proxy scenario works by opening two long-lived goroutines, it's quite different from the original Easegress HTTPServer+Pipeline process. So I decided not to implement it inside HTTPServer, and to catalog it under TrafficGate in this first version.
I will implement this WebSocketServer first version and try to figure out how to integrate it with the original HTTPServer at the same time. (Moreover, multiple protocol support requirements, such as MQTT and TCP, are on the way...)
This may be covered by FAAS, but I'm still reading the docs and I haven't seen this out there and I had the thought.
For measured rates of incoming traffic, or any metric really, a trigger based on a moving-average deviation or some other calculation could give teams the ability to hit a webhook for alarm or automation events.
If you create an HTTP pipeline without a loadBalance definition, like the one given here, you will receive a panic.
payload
name: something
kind: HTTPPipeline
flow:
- filter: proxyToBackend
filters:
- name: proxyToBackend
kind: Proxy
mainPool:
servers:
- url: some backend
client error
Error: 400: validate spec failed:
generalErrs:
- |
filters: jsonschemaErrs:
- 'mainPool: loadBalance is required'
systemErr: 'BUG: call Validate for proxy.PoolSpec panic: runtime error: invalid memory
address or nil pointer dereference'
exit status 1
server error
2021-06-23T19:09:03.093Z ERROR v/validaterecorder.go:145 BUG: call Validate for proxy.PoolSpec panic: runtime error: invalid memory address or nil pointer dereference: goroutine 305224 [running]:
runtime/debug.Stack(0x17c4de1, 0x23, 0xc000f56250)
/usr/local/go/src/runtime/debug/stack.go:24 +0x9f
github.com/megaease/easegress/pkg/v.(*ValidateRecorder).recordGeneral.func1(0x7f8b8c218db8, 0xc00131a600, 0xc0015ebb00)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/validaterecorder.go:145 +0xf5
panic(0x1584bc0, 0x23145b0)
/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/megaease/easegress/pkg/filter/proxy.PoolSpec.Validate(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc000548928, 0x1, 0x1, 0x0, ...)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/filter/proxy/pool.go:89 +0xbf
github.com/megaease/easegress/pkg/v.(*ValidateRecorder).recordGeneral(0xc0015ebb00, 0xc0022ac0e0, 0x0)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/validaterecorder.go:150 +0x14b
github.com/megaease/easegress/pkg/v.traverseGo(0xc0022ac0e0, 0x0, 0xc000f56d58)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/v.go:185 +0xf2
github.com/megaease/easegress/pkg/v.traverseGo(0xc0022ac0a0, 0xc00103db20, 0xc000f56d58)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/v.go:214 +0x7a5
github.com/megaease/easegress/pkg/v.traverseGo(0xc001fb1fc0, 0x0, 0xc000f56d58)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/v.go:198 +0x657
github.com/megaease/easegress/pkg/v.traverseGo(0xc001fb1be0, 0x0, 0xc000f56d58)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/v.go:214 +0x7a5
github.com/megaease/easegress/pkg/v.Validate(0x158f700, 0xc000fc6850, 0xc0004bf030, 0x66, 0x70, 0x0)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/v.go:118 +0x77d
github.com/megaease/easegress/pkg/object/httppipeline.newFilterSpecInternal(0xc00229f230, 0xc001fab2c0, 0xc0021c6ab0, 0x7)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/object/httppipeline/spec.go:91 +0x457
github.com/megaease/easegress/pkg/object/httppipeline.Spec.Validate(0xc0019e26e0, 0x1, 0x1, 0xc0005486c0, 0x1, 0x1, 0xc000ed4c00, 0xafb, 0xc00, 0x0, ...)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/object/httppipeline/httppipeline.go:251 +0x2b3
github.com/megaease/easegress/pkg/v.Validate(0x15538a0, 0xc00229e750, 0xc000ed4c00, 0xafb, 0xc00, 0xc00)
/go/pkg/mod/github.com/megaease/[email protected]/pkg/v/v.go:107 +0x836
github.com/megaease/easegress/pkg/supervisor.NewSpec(0xc0005b1000, 0xafb, 0xafb, 0xc0005b1000, 0xafb)
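Until the validator handles the missing field gracefully, the panic can be avoided by supplying an explicit loadBalance section in mainPool. A sketch of a corrected spec, assuming the roundRobin policy name; the backend URL is illustrative:

```yaml
name: something
kind: HTTPPipeline
flow:
  - filter: proxyToBackend
filters:
  - name: proxyToBackend
    kind: Proxy
    mainPool:
      loadBalance:
        policy: roundRobin   # required by the Proxy spec validator
      servers:
        - url: http://127.0.0.1:9095   # illustrative backend URL
```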
Some notes on this
Receiving this warning message when executing the server code locally
WARNING: Package "github.com/golang/protobuf/protoc-gen-go/generator" is deprecated.
A future release of golang/protobuf will delete this package,
which has long been excluded from the compatibility promise.
Hey folks, I wanted to raise a discussion around joining the CNCF as a way to push strong governance in general, and perhaps also gain some visibility for the project. There is a good breakdown of the requirements that would need to be met here: https://github.com/cncf/toc/blob/main/process/graduation_criteria.adoc
As we know, canary deployment is a very useful feature to help customers verify changes from the user side.
To support this feature well, we need more strategies for traffic coloring.
The following configuration is proposed.
HTTP Request Header Coloring
Access Token - JWT or HMAC
Notes: all of the traffic coloring strategies must be based on the user-side instead of the server-side.
Meanwhile, we also need a couple of statistics for specific coloring rules.
Like Nginx, embedded variables[1] will make configuration easy for users; furthermore, they will bring greater flexibility to configurations.
For example:
When we want to set an HTTP header in a requestAdaptor filter, it is better to write
set:
Host: $host
than
set:
Host: "xxxx.com"
I strongly recommend we introduce the variable mechanism.
[1] https://nginx.org/en/docs/http/ngx_http_core_module.html#variables
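A minimal sketch of such a substitution mechanism, using Go's os.Expand with a hypothetical per-request variable table. The variable names are illustrative; a real design would have to define which variables exist and when they are resolved:

```go
package main

import (
	"fmt"
	"os"
)

// expandVars substitutes $name references in a config value from a
// per-request variable table, in the spirit of Nginx's embedded
// variables. Unknown variables expand to the empty string.
func expandVars(value string, vars map[string]string) string {
	return os.Expand(value, func(name string) string {
		return vars[name]
	})
}

func main() {
	vars := map[string]string{
		"host":        "xxxx.com",
		"remote_addr": "10.0.0.7",
	}
	// e.g. a requestAdaptor "set" entry: Host: $host
	fmt.Println(expandVars("$host", vars))               // xxxx.com
	fmt.Println(expandVars("client=$remote_addr", vars)) // client=10.0.0.7
}
```

os.Expand also accepts the ${name} form, which helps when a variable is embedded inside a longer string.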
This error occurs when I use mirrorPool, because func (mr *masterReader) Read(p []byte) (n int, err error) calls the Clone method twice.
I have made some changes, for reference only:
func (mr *masterReader) Read(p []byte) (n int, err error) {
buff := bytes.NewBuffer(nil)
tee := io.TeeReader(mr.r, buff)
n, err = tee.Read(p)
if n != 0 {
mr.buffChan <- buff.Bytes()
} else {
close(mr.buffChan)
}
// if err == io.EOF {
// close(mr.buffChan)
// }
return n, err
}
I think etcd clients and peers are already closed in *embed.Etcd.Close(); there is no need to close them again.
Sometimes, we need a fake HTTP API for mock tests. This is a new requirement for a mock filter that can return a configured status code and response body.
I'm seeing a bit of duplication in the help message output for server, and overall this is far too overwhelming a help page, but that can be addressed elsewhere. See version here printed 2 times, as well as others:
Usage of ./easegress-server:
--api-addr string Address([host]:port) to listen on for administration traffic. (default "localhost:2381")
--cluster-advertise-client-urls strings List of this member’s client URLs to advertise to the rest of the cluster. (default [http://localhost:2379])
--cluster-initial-advertise-peer-urls strings List of this member’s peer URLs to advertise to the rest of the cluster. (default [http://localhost:2380])
--cluster-join-urls strings List of URLs to join, when the first url is the same with any one of cluster-initial-advertise-peer-urls, it means to join itself, and this config will be treated empty.
--cluster-listen-client-urls strings List of URLs to listen on for cluster client traffic. (default [http://localhost:2379])
--cluster-listen-peer-urls strings List of URLs to listen on for cluster peer traffic. (default [http://localhost:2380])
--cluster-name string Human-readable name for the new cluster, ignored while joining an existed cluster. (default "eg-cluster-default-name")
--cluster-request-timeout string Timeout to handle request in the cluster. (default "10s")
--cluster-role string Cluster role for this member (reader, writer). (default "writer")
-f, --config-file string Load server configuration from a file(yaml format), other command line flags will be ignored if specified.
--cpu-profile-file string Path to the CPU profile file.
--data-dir string Path to the data directory. (default "data")
--debug Flag to set lowest log level from INFO downgrade DEBUG.
--force-new-cluster Force to create a new one-member cluster.
--home-dir string Path to the home directory. (default "./")
--labels stringToString The labels for the instance of Easegress. (default [])
--log-dir string Path to the log directory. (default "log")
--member-dir string Path to the member directory. (default "member")
--memory-profile-file string Path to the memory profile file.
--name string Human-readable name for this member. (default "eg-default-name")
-c, --print-config Print the configuration.
-v, --version Print the version and exit.
--wal-dir string Path to the WAL directory.
--api-addr string Address([host]:port) to listen on for administration traffic. (default "localhost:2381")
--cluster-advertise-client-urls strings List of this member’s client URLs to advertise to the rest of the cluster. (default [http://localhost:2379])
--cluster-initial-advertise-peer-urls strings List of this member’s peer URLs to advertise to the rest of the cluster. (default [http://localhost:2380])
--cluster-join-urls strings List of URLs to join, when the first url is the same with any one of cluster-initial-advertise-peer-urls, it means to join itself, and this config will be treated empty.
--cluster-listen-client-urls strings List of URLs to listen on for cluster client traffic. (default [http://localhost:2379])
--cluster-listen-peer-urls strings List of URLs to listen on for cluster peer traffic. (default [http://localhost:2380])
--cluster-name string Human-readable name for the new cluster, ignored while joining an existed cluster. (default "eg-cluster-default-name")
--cluster-request-timeout string Timeout to handle request in the cluster. (default "10s")
--cluster-role string Cluster role for this member (reader, writer). (default "writer")
-f, --config-file string Load server configuration from a file(yaml format), other command line flags will be ignored if specified.
--cpu-profile-file string Path to the CPU profile file.
--data-dir string Path to the data directory. (default "data")
--debug Flag to set lowest log level from INFO downgrade DEBUG.
--force-new-cluster Force to create a new one-member cluster.
--home-dir string Path to the home directory. (default "./")
--labels stringToString The labels for the instance of Easegress. (default [])
--log-dir string Path to the log directory. (default "log")
--member-dir string Path to the member directory. (default "member")
--memory-profile-file string Path to the memory profile file.
--name string Human-readable name for this member. (default "eg-default-name")
-c, --print-config Print the configuration.
-v, --version Print the version and exit.
--wal-dir string Path to the WAL directory.
(once we come to a conclusion, we will take the corresponding action immediately, feel free to join our discussion :-) )
[1] https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
[2] https://lab.github.com/githubtraining/github-actions:-continuous-integration
Currently, the architecture is shown below.
It has some problems:
So we decided to use a dedicated controller to manage them. Here's the new architecture:
We get these benefits:
We used a UNIX-like operating system syscall[1] to get the time for better performance, which means EG can't be compiled on Windows. As EG is now open source, we should support Windows compatibility.
In reverse proxy scenarios, especially in the dev environment, an HTTPServer will bind multiple domains, and each domain has its own certs. The current configuration of Easegress only supports one HTTPServer with one cert, which can't handle the general situation in reverse proxy scenarios.
The Service Registry used internally by our company is currently etcd.
I see that there is an EtcdServiceRegistry. But the data structure we store now does not correspond to the structure needed to access the registry. So I want to develop a service discovery that can parse our structure. I understand that if I add my code directly to the source code, I need to recompile to use my service discovery.
So, I want to ask if there are other ways to meet my needs
This line in the cli limits the tool to only work with http endpoints
https://github.com/megaease/easegress/blob/main/cmd/client/command/common.go#L108
Suggest that we do one of the following:
The Docker image as it sits is currently a single-stage build, which leaves a bunch of elements inside the Makefile; this puts more effort on setting up proper development environments and can possibly lead to inconsistent builds.
I propose that we
https://github.com/megaease/easegress/blob/main/release/build-client-image.sh
and https://github.com/megaease/easegress/blob/main/release/build-server-image.sh
It would be great for a deployment of Easegress to be able to replicate its configuration into another Easegress instance.
This would be done as a parent/child relationship only, leaving the child instances to carry the same configuration as the parent; children can only receive changes from the parent.
This configuration would enable Easegress to be deployed in a 3-availability-zone pattern in each geo location, and then joined as one global cluster.
As we discussed in the technical board[1], we decided to support automated certificates via Let's Encrypt.
We should support these features:
kind: ACME
name: acme-example
The certBase64 and keyBase64 are for imported ACME certificates.
kind: HTTPServer
name: http-server-example
acme:
email: [email protected] # required while certBase64 and keyBase64 are empty
renewTimeout: 60d #optional
caServer: https://acme-v02.api.letsencrypt.org/directory # optional [2]
domains: ['megaease.com'] # required
httpChallenge: {}
tlsChallenge: {} # conflicts with httpChallenge
certBase64: ... # optional
keyBase64: ... # optional
keyType: RSA4096
# ...
The priority for certificates of HTTPServer:
[1] https://docs.google.com/document/d/1gKM6uV3zzjPdPjSdhhQ0eqROaJppgg38CduY-BOjVNU/edit#
[2] https://letsencrypt.org/docs/acme-protocol-updates/#acme-v2-rfc-8555
I always get a 411 when I make a PUT request. I don't know if it's a bug in the tool.
[2021-07-28T15:11:43.267+08:00] [192.168.0.115:48252 192.168.0.115 PUT /zlin1/temp.j2 HTTP/1.1 411] [1.00507014s rx:599B tx:457B] [proxy#main#addr: http://localhost:8001 | proxy#main#code: 411 | pipeline: proxy(4.063488ms) | proxy#main#duration: 1.004979489s]
lines below are removed in #59
ci = &cacheItem{ipFilterChan: path.ipFilterChain, path: path}
rules.putCacheItem(ctx, ci)
m.handleRequestWithCache(rules, ctx, ci)
return
d6af0dc#diff-9a718d32cef26863af1d2292e908365d40b9d2f761b2bc1bbde6ec743d3b8bdd
When you run the example in the readme
https://github.com/megaease/easegress#create-an-httpserver-and-pipeline
you will always get a 404 because path.Headers == nil;
the Easegress server will never touch any backend server if no header is set in the config object.
In the aws environment, on host 'gw', we set a metrics controller to send request metrics to the monitor.
We can't get the eg-http-request-2021.06.27 index in ElasticSearch. A consumer reading from Kafka found a problem with the sent data:
new Date(1624434775000)
Wed Jun 23 2021 15:52:55 GMT+0800 (China Standard Time) {}
The timestamp is incorrect.
// A Mutex is a mutual exclusion lock.
// The zero value for a Mutex is an unlocked mutex.
//
// A Mutex must not be copied after first use.
type Mutex struct {
state int32
sema uint32
}
A Mutex must not be copied after first use.
Currently, we publish a unified docker image when we release a new Easegress. As we know, Easegress can be used in many scenarios, e.g., K8s ingress, control plane of EaseMesh, reverse proxy service, etc. If we had only one version of the Docker image, users would find it hard to operate; they would need to know all the configuration details for their usage.
To ease use in a specific scenario, I propose we provide dedicated docker images for specific usages. We can provide a dedicated K8s ingress controller docker image with the ingress controller enabled by default; users would just need to configure simple parameters via arguments or environment variables. We can also provide a default mesh controller docker image...
I see the supported application layer protocol proxies. Is TCP supported? There are scenarios based on Netty's TCP persistent connection applications, e.g., IoT scenarios.
I want to set the host
header of backend request, so I leverage requestAdaptor
filter to set Host
, the configuration is:
filters:
- name: requestAdaptor
kind: RequestAdaptor
method: ""
path: null
header:
del: []
set:
Host: "xxxx.com"
X-Forwarded-Proto: "https"
but the request is sent to the backend with an improper host header value (actually the backend IP address), which doesn't fulfill my expectation.
As discussed in #79 (comment), it is possible for the current mesh informer implementation to send notifications after it is closed; this is unexpected and may cause bugs.
To fix this, we can add a wait operation in informer.Close to ensure all notifications are sent before the function returns. The change itself is simple, but we need to check the existing code to ensure the wait does not cause a deadlock.
If the above solution is not possible, we can document the behavior and let client code handle the delayed notifications.
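The proposed wait-in-Close can be sketched with a WaitGroup guarding in-flight notifications. This is an illustrative type, not the actual mesh informer code; note that with an unbuffered channel and no reader, the wait in Close would exhibit exactly the deadlock risk mentioned above:

```go
package main

import (
	"fmt"
	"sync"
)

// notifier sketches the proposed fix: each in-flight notification is
// tracked in a WaitGroup, and Close waits for them before returning,
// so no notification can be delivered after Close returns.
type notifier struct {
	mu     sync.Mutex
	wg     sync.WaitGroup
	closed bool
	events chan string
}

func newNotifier() *notifier {
	return &notifier{events: make(chan string, 16)}
}

// notify delivers an event unless the notifier is already closed.
func (n *notifier) notify(event string) {
	n.mu.Lock()
	if n.closed {
		n.mu.Unlock()
		return // dropped: already closed
	}
	n.wg.Add(1)
	n.mu.Unlock()

	n.events <- event
	n.wg.Done()
}

// Close marks the notifier closed, waits for in-flight notifications,
// then closes the channel.
func (n *notifier) Close() {
	n.mu.Lock()
	n.closed = true
	n.mu.Unlock()
	n.wg.Wait()
	close(n.events)
}

func main() {
	n := newNotifier()
	n.notify("update-1")
	n.Close()
	n.notify("update-2") // dropped: notifier already closed

	count := 0
	for range n.events {
		count++
	}
	fmt.Println(count) // 1
}
```

Checking the closed flag and adding to the WaitGroup under the same lock is what closes the race window between notify and Close.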
Our flagship product Easegress (EaseGateway) has many splendid features; it is fascinating, especially our pipeline/plugin mechanism, which empowers customers to achieve their specific goals in a customized way.
But the current pipeline/plugin mechanism still has too many barriers to use. If a user really wants to extend Easegress, he needs to conquer the following issues with the pipeline/plugin mechanism.
I think the last two of these barriers are the biggest obstacles for users to extend Easegress. So I think we need another pipeline/plugin mechanism for EG customization.
Compared with other gateway products, we find they all choose a solution of embedding a lightweight language to enhance extensibility, but there are several cons in these implementations.
If we want to provide a more flexible customization mechanism, we must solve the above disadvantages.
After several days of study, I found we can leverage WebAssembly to solve the above problems (I stand corrected...), because WebAssembly has the following features:
Golang has a rich ecosystem; I found an open-source Golang WebAssembly runtime library at [1].
PS: I don't want to deprecate the current pipeline/plugin mechanism, but I think we need multiple customization abstractions, different ways to handle different scenarios. This solution has been adopted by Envoy for its filter extensibility [2].
[1] https://github.com/wasmerio/wasmer-go
[2] https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/wasm-cc
Thanks to the contributors for introducing GitHub Actions, but all PRs failed the GitHub Action checks. Apparently, the root cause of the failure isn't related to the GitHub Action runtime environment, as our unit tests should be environment-independent.
The following text is a snapshot of the check output for the failure:
2021-07-08T02:51:50.758Z ERROR cluster/cluster.go:679 register cluster name test-cluster failed: create client failed: couldn't open sink "/tmp/eg-test/test-member-001/log/etcd_client.log": open /tmp/eg-test/test-member-001/log/etcd_client.log: no such file or directory
panic: register cluster name test-cluster failed: create client failed: couldn't open sink "/tmp/eg-test/test-member-001/log/etcd_client.log": open /tmp/eg-test/test-member-001/log/etcd_client.log: no such file or directory
goroutine 189 [running]:
github.com/megaease/easegress/pkg/cluster.(*cluster).startServer.func2(0xc0004689a0, 0xc00003e600, 0xc0001cad60, 0xc000121090, 0xc000121098)
/home/runner/work/easegress/easegress/pkg/cluster/cluster.go:680 +0x43d
created by github.com/megaease/easegress/pkg/cluster.(*cluster).startServer
/home/runner/work/easegress/easegress/pkg/cluster/cluster.go:668 +0x35b
FAIL github.com/megaease/easegress/pkg/cluster 0.655s
All the GitHub Action runs should check normally.
It's wrong to use a temporary directory with a hard-coded path. I strongly recommend ioutil.TempDir(os.TempDir(), prefix)
instead of mkdir("/tmp" + prefix)
We can easily reproduce the issue with make build-docker, but we need to slightly modify build/package/Dockerfile with:
diff --git a/build/package/Dockerfile b/build/package/Dockerfile
index eccdafa..39ff169 100644
--- a/build/package/Dockerfile
+++ b/build/package/Dockerfile
@@ -1,10 +1,10 @@
-FROM golang:alpine AS builder
-RUN apk --no-cache add make git
+FROM golang AS builder
+#RUN apk --no-cache add make git
WORKDIR /opt/easegress
COPY . .
-RUN make clean && make build
+RUN make clean && make test && make build
After upgrading to the latest Easegress, I've accurately configured EG_CLUSTER_PEER_URL & EG_CLUSTER_CLIENT_URL through environment variables, whose values are URLs containing a DOMAIN name, but when I start it, I get:
2021-05-29T15:06:16.731Z ERROR cluster/cluster.go:190 start cluster failed (0 retries): start server failed: expected IP in URL for binding (http://easemesh-control-plane-0.easemesh-controlplane-hs.easemesh:2380)
The Easegress cluster should start up normally.
I checked the manual on etcd.io:
–listen-peer-urls: List of URLs to listen on for peer traffic. This flag tells the etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. [1]
–listen-client-urls: List of URLs to listen on for client traffic. This flag tells the etcd to accept incoming requests from the clients on the specified scheme://IP:port combinations [2]
Both configurations are dedicated to telling members which port and interface they should listen on; they can be configured as http://0.0.0.0:XXXX, which means the member listens on all interfaces.
DNS names are invalid for these configurations.
So in our PR #4, I don't think we can leverage --listen-client-urls to judge whether a cluster is new or not; it's the wrong implementation.
etcd gives us other configurations; I've no idea whether we support them or not. The configurations are:
–initial-advertise-peer-urls: [3]
–advertise-client-urls: [4]
We can leverage --advertise-client-urls
to reimplement PR #4.
PS: both configurations support environment variables natively; I don't know whether we support that.
[1] https://etcd.io/docs/v3.1/op-guide/configuration/#--listen-peer-urls
[2] https://etcd.io/docs/v3.1/op-guide/configuration/#--listen-client-urls
[3] https://etcd.io/docs/v3.1/op-guide/configuration/#--initial-advertise-peer-urls
[4] https://etcd.io/docs/v3.1/op-guide/configuration/#--advertise-client-urls
We propose to add MQTTProxy as a BusinessController to Easegress, using github.com/eclipse/paho.mqtt.golang/packets to parse MQTT packets. paho.mqtt.golang is an MQTT 3.1.1 Go client introduced by the Eclipse Foundation (which also introduced the most widely used MQTT broker, mosquitto).
kind: MQTTProxy
name: mqttproxy
port: 1883 # tcp port for mqtt clients to connect
backendType: Kafka
kafkaBroker:
backend: ["123.123.123.123:9092", "234.234.234.234:9092"]
useTLS: true
certificate:
- name: cert1
cert: balabala
key: keyForbalabala
- name: cert2
cert: foo
key: bar
auth:
# username and password for mqtt clients to connect broker
- userName: test
passBase64: dGVzdA==
- userName: admin
passBase64: YWRtaW4=
topicMapper:
# detail described in following
matchIndex: 0
route:
- name: gateway
matchExpr: "gate*"
- name: direct
matchExpr: "dir*"
policies:
- name: direct
topicIndex: 1
route:
- topic: iot_phone
exprs: ["iphone", "xiaomi", "oppo", "pixel"]
- topic: iot_other
exprs: [".*"]
headers:
0: direct
1: device
2: status
- name: gateway
topicIndex: 3
route:
- topic: iot_phone
exprs: ["iphone", "xiaomi", "oppo", "pixel"]
- topic: iot_other
exprs: [".*"]
headers:
0: gateway
1: gatewayID
2: device
3: status
More detail about topic mapping: in MQTT, a topic has multiple levels, for example, direct/iphone/log. Topic mapping is used to map an MQTT topic to a Kafka topic with headers.
Since there may be multiple schemas for your MQTT topics, we first provide a router to route an MQTT topic to different mapping policies, and then do the mapping in that policy.
For example,
...
matchIndex: 0
route:
- name: gateway
matchExpr: "gate*"
- name: direct
matchExpr: "dir*"
...
means that we use MQTT topic level 0 to match matchExpr to find the corresponding policy. In this case, gateway/gate123/iphone/log will match policy gateway, and direct/iphone/log will match policy direct.
More examples of the topic mapper, using the yaml above:
MQTT topic pattern1: gateway/gatewayID/device/status -> match policy gateway
MQTT topic pattern2: direct/device/status -> match policy direct
example1: "gateway/gate123/iphone/log"
Kafka
topic: iot_phone
headers:
gateway: gateway
gatewayID: gate123
device: iphone
status: log
example2: "direct/xiaomi/status"
Kafka
topic: iot_phone
headers:
direct: direct
device: xiaomi
status: status
example3: "direct/tv/log"
Kafka
topic: iot_other
headers:
direct: direct
device: tv
status: log
An empty topicMapper means there is no mapping between MQTT topics and Kafka topics.
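The two-step routing described above can be sketched in Go. Field names mirror the YAML, but this is an illustrative reimplementation under assumed semantics (regexps matched against a single topic level, first matching route wins), not the actual filter code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

type topicRoute struct {
	topic string   // Kafka topic
	exprs []string // regexps matched against the selected level
}

type policy struct {
	name       string
	topicIndex int            // which MQTT level picks the Kafka topic
	route      []topicRoute   // ordered: first matching expr wins
	headers    map[int]string // MQTT level index -> Kafka header name
}

type topicMapper struct {
	matchIndex int               // which MQTT level selects the policy
	route      map[string]string // policy name -> matchExpr
	policies   []policy
}

// mapTopic maps an MQTT topic like "direct/xiaomi/status" to a Kafka
// topic plus headers: pick a policy by matchIndex, then a Kafka topic
// by topicIndex, then fill headers from the topic levels.
func (m topicMapper) mapTopic(mqttTopic string) (string, map[string]string, error) {
	levels := strings.Split(mqttTopic, "/")
	var pol *policy
	for i := range m.policies {
		expr := m.route[m.policies[i].name]
		if ok, _ := regexp.MatchString(expr, levels[m.matchIndex]); ok {
			pol = &m.policies[i]
			break
		}
	}
	if pol == nil {
		return "", nil, fmt.Errorf("no policy matches %q", mqttTopic)
	}
	headers := map[string]string{}
	for idx, name := range pol.headers {
		headers[name] = levels[idx]
	}
	for _, r := range pol.route {
		for _, e := range r.exprs {
			if ok, _ := regexp.MatchString(e, levels[pol.topicIndex]); ok {
				return r.topic, headers, nil
			}
		}
	}
	return "", nil, fmt.Errorf("no Kafka topic for %q", mqttTopic)
}

// newExampleMapper builds the "direct" policy from the yaml above.
func newExampleMapper() topicMapper {
	return topicMapper{
		matchIndex: 0,
		route:      map[string]string{"direct": "dir*"},
		policies: []policy{{
			name: "direct", topicIndex: 1,
			route: []topicRoute{
				{topic: "iot_phone", exprs: []string{"iphone", "xiaomi", "oppo", "pixel"}},
				{topic: "iot_other", exprs: []string{".*"}},
			},
			headers: map[int]string{0: "direct", 1: "device", 2: "status"},
		}},
	}
}

func main() {
	m := newExampleMapper()
	topic, headers, _ := m.mapTopic("direct/xiaomi/status")
	fmt.Println(topic, headers["device"]) // iot_phone xiaomi
}
```

This reproduces example2 above: "direct/xiaomi/status" lands in iot_phone with device=xiaomi, while "direct/tv/log" falls through to the catch-all iot_other route.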
Client Server Kafka
+--------+ +--------+ +-------+
| |1. connect | | | |
| |----------->| | | |
| | | check | | |
| |2. connack | info | | |
| |<-----------| | | |
| | | | | |
| | | | | |
| |3. publish | |4. send to | |
| |----------->| topic | kafka | |
| | |transfer|---------->| |
| |3.1 puback | | | |
| |if qos=1 | | | |
+--------+ +--------+ +-------+
Other requests like pingreq/pingresp and subscribe/unsubscribe are quite similar.
Now, any MQTT 3.1.1 client can connect to our server and publish messages to the Kafka backend, and we also implement related control logic like keep-alive, client takeover, etc.
We also added a session module, which allows us to keep MQTT clients' subscription information and publish messages to MQTT clients. Messages published to MQTT clients will come from our HTTP server.
We now support the backend sending messages back to MQTT clients through an HTTP endpoint.
API for the HTTP endpoint: http://127.0.0.1:2381/apis/v1/mqttproxy/{name}/topics/publish, where {name} is the name of the MQTT proxy.
{
"topic": "yourTopicName",
"qos": 1, // currently only support 0 and 1
"payload": "dataPayload",
"base64": false
}
To send binary data, you can base64-encode your binary data and set the base64 flag to true. Your client will receive the original binary data; we will do the decoding.
The HTTP endpoint schema also works for multi-node deployments. Say you have 3 Easegress instances called eg-0, eg-1, and eg-2, and your MQTT client connects to eg-0; if you send messages to eg-1, your client will receive the message too.
We also support wildcard subscription.
For example,
POST http://127.0.0.1:2381/apis/v1/mqttproxy/mqttproxy/topics/publish
{
"topic": "Beijing/Phone/Update",
"qos": 1, // currently only support 0 and 1
"payload": "time to update",
"base64": false
}
Clients subscribing to the following topics will receive the message:
"Beijing/Phone/Update"
"Beijing/+/Update"
"Beijing/Phone/+"
"+/Phone/Update"
"Beijing/+/+"
"Beijing/#"
"+/+/+"
Easegress can now be configured as a K8s ingress controller; however, users must manually send a command to Easegress to apply the spec. This is not user-friendly and greatly increases the difficulty of the K8s ingress controller deployment process.
Add new command-line arguments to direct Easegress to load more spec files at startup, and create objects according to these specs.
To make specs of an Easegress cluster consistent, the specs need to be saved into Etcd as if they are applied by command.
Because object specs can be updated dynamically after an Easegress instance starts, Etcd may contain newer copies of the specs than the files; Easegress should therefore use the copy from Etcd if the spec of an object exists both in the spec files and in Etcd.
The code coverage https://app.codecov.io/gh/megaease/easegress seems to be less than 60%.
It would be great if it could be brought up to around 80-90%, and ideally 100%.
As the WebAssembly filter is almost done, users will be able to leverage WebAssembly to extend the ability of Easegress soon.
Easegress has provided a set of host functions for the interoperation between Easegress and user-developed WebAssembly code, and because it is really painful to write WebAssembly code directly, Easegress also needs to provide SDKs for different high-level languages to wrap the obscure things.
The 1st language we want to support is JavaScript, the most popular one of today. But we cannot do that because there's still not a good compiler to compile JavaScript to WebAssembly.
The alternative choice for JavaScript is AssemblyScript, which is a variant of TypeScript, has similar syntax with JavaScript and was designed for WebAssembly.
Many applications require multiple configurations to be created, such as an HttpServer and a PipelineFilter. Managing multiple configurations grouped together in a single file (separated by --- in YAML) would bring huge simplicity to users, like Kubernetes [1].
[1] https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/
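The multi-document handling could start as simply as splitting on standalone "---" separator lines. A simplified sketch with hypothetical spec contents; a full implementation would use a YAML decoder, which also handles "---" inside block scalars correctly:

```go
package main

import (
	"fmt"
	"strings"
)

// splitYAMLDocs splits a multi-document YAML file on standalone "---"
// separator lines, so each object spec can be applied individually.
func splitYAMLDocs(data string) []string {
	var docs []string
	var cur []string
	flush := func() {
		doc := strings.TrimSpace(strings.Join(cur, "\n"))
		if doc != "" {
			docs = append(docs, doc)
		}
		cur = cur[:0]
	}
	for _, line := range strings.Split(data, "\n") {
		if strings.TrimSpace(line) == "---" {
			flush()
			continue
		}
		cur = append(cur, line)
	}
	flush()
	return docs
}

func main() {
	file := `kind: HTTPServer
name: server-demo
---
kind: HTTPPipeline
name: pipeline-demo`
	docs := splitYAMLDocs(file)
	fmt.Println(len(docs)) // 2
}
```

Each returned document could then be fed through the existing single-spec apply path unchanged.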
The proxy and other resources making egress calls should have the ability to cache DNS lookups, or even use static DNS entries, in cases where DNS stability or replication may be in question.
func init() {
// FIXME: Rewrite APIAggregator because the HTTPProxy is eliminated
// I(@xxx7xxxx) think we should not empower filter to cross pipelines.
// httppipeline.Register(&APIAggregator{})
}
I saw this in the code. Is this function still to be developed? I think it would be very useful! Thanks.
We propose a Controller called Function for abstracting the communication between Easegress's HTTP server and the Function provider (Knative) locally, with the help of a separate FaaSServer binary.
name: faascontroller-1
kind: Function
provider: knative # FaaS provider kind, currently we only support Knative
httpServer:
http3: false
port: 10083
keepAlive: true
keepAliveTimeout: 60s
https: false
certBase64:
keyBase64:
maxConnections: 10240
knative:
functionHost: 127.0.0.1.xip.io # the shared host for Knative functions; FaaSController forwards ingress traffic to this host combined with the function's name
name: demo
image: dev.local/colorddeploy:14.0 # the function logic, currently we only support HTTP protocol
autoScaleType: concurrency # there are three kinds of scale type, concurrency/rps/cpuPercent
autoScaleValue: "300" # the scaling metrics value
minReplica: 1 # the function's minimum instance number
maxReplica: 5 # the function's maximum instance number
limitCPU: 200m # the function's maximum CPU resource (same as a K8s Pod's resource limit)
limitMemory: 80Mi # the function's maximum memory resource (same as a K8s Pod's resource limit)
requestCPU: 100m # the function's minimum CPU resource (same as a K8s Pod's resource request)
requestMemory: 40Mi # the function's minimum memory resource (same as a K8s Pod's resource request)
requestAdaptor: # optional, indicates how FaaSController should transform the original HTTP request into the format the FaaSProvider needs
method: POST # change the HTTP method to `POST` when visiting the FaaSProvider's function
path:
replace: / # replace the HTTP path with '/'; Regexp replacing, trimming, and adding are also supported
header:
set:
X-Func: func-demo # add one HTTP header
There are four types of function status: `pending`, `active`, `inactive`, and `failed` [2]. Basically, they come from AWS Lambda's statuses.

- `pending`: the function is being provisioned. Once it has been provisioned successfully by the FaaSProvider (Knative), its status becomes `active`, and FaaSController adds the routing rule for this function to its HTTPServer.
- `active`: the function keeps running, no matter whether there are requests or not. Calling FaaSController's `stop` RESTful API turns the function `inactive`. Updating the function's spec (the image URL or other fields) or deleting the function also requires stopping it first.
- Starting a function puts it into `pending` status. (Once provisioned successfully, it becomes `active` automatically.)
- Stopping a function puts it into `inactive` status. (Clients will then receive an HTTP 404 failure response; to stay zero-downtime, deploying another new FaaS function in this FaaSController may be helpful.)

+-----------+  provision successfully  +-----------+
|  pending  |------------------------->|  active   |<------+
+-----------+                          +-----------+       |
  |      ^                                  |              |
  | error| update                      stop |              | start
  v      |                                  v              |
+-----------+          error           +-----------+       |
|  failed   |<-------------------------|  inactive |-------+
+-----------+                          +-----------+
The RESTful API path obeys this design: `http://host/{version}/{namespace}/{scope}(optional)/{resource}/{action}`.
| Operation | URL | Method |
|---|---|---|
| Create a function | http://eg-host/apis/v1/faas/faas-controller1 | POST |
| Start a function | http://eg-host/apis/v1/faas/faas-controller1/demo1/start | PUT |
| Stop a function | http://eg-host/apis/v1/faas/faas-controller1/demo1/stop | PUT |
| Update a function | http://eg-host/apis/v1/faas/faas-controller1/demo1 | PUT |
| Delete a function | http://eg-host/apis/v1/faas/faas-controller1/demo1 | DELETE |
| Get a function | http://eg-host/apis/v1/faas/faas-controller1/demo1 | GET |
| Get function list | http://eg-host/apis/v1/faas/faas-controller1 | GET |
You know what would be rad? A traffic doubler. It would have to be blind, or wait for the responses and perform some sort of logic to pick a responder, merge the responses, or otherwise combine them.
Stacking that in front of a filter chain that altered data to effectively scrub or cleanse it via regex or other patterns (one filter per regex; filters should be one feature, one function, yeah?).
This would finally allow teams to test A/B systems live.
We evaluated different WebAssembly engines, such as wasmer, wasmtime, WasmEdge, and aWsm, during the preparation for #1, and we chose wasmtime in the end because it is the only one that supports all the required features. However, we consider wasmtime an acceptable choice rather than the best choice, because we observed virtual memory leaks and it requires Cgo (Cgo is a common issue of Wasm engines).
On the other hand, as a new and rapidly evolving technology, the features implemented by Wasm engines vary from engine to engine. Although Easegress as a Wasm host only requires a limited set of Wasm features, it should provide maximum flexibility for users to use any Wasm feature.
So our target is that Easegress can use different Wasm engines.
We define an interface between Easegress and Wasm engines, and the implementations of the interface are Wasm drivers. By switching between different drivers, Easegress and user Wasm code can adapt to different Wasm engines without any changes. Because it is much easier for developers to implement an interface than to write code for Easegress directly (which requires understanding the Easegress architecture), the drivers can be developed by MegaEase, Wasm engine vendors, or third-party developers.
Basically, the driver interface should include the following features:
All Wasm engines already support the first four items, though they differ much in implementation details; it is the drivers' responsibility to convert them to the unified interface. For the last one, we only know that wasmtime currently supports it (that's why we chose it for #1); other Wasm engines must implement it before a driver can be developed.
There is some reference documentation left to write; we need to complete it:
Much like the custom resource definition in Kubernetes, it would be valuable as an end user to have a way to create an object definition which would represent one or more backend processes.
This feature would allow teams to aggregate not only APIs, but entire business functions, into objects that can be reconciled against real state in cloud-native manners.
Backend processes should be asynchronous and push updates to the object as state changes, only and ever without exception.
An example flow would be:
There are a few logs that get folded into files, which limits visibility; and because of the file writes, we break the immutability guarantee of Docker, as well as forcing it to perform copy-on-write operations.
Proposal:
See here: #68
We have developed an ingress controller in EaseMesh, but it is an EaseMesh-dedicated implementation, so we decided to implement a general Kubernetes ingress controller.
The ingress controller consumes Kubernetes Ingress resources and converts them to an Easegress configuration, which allows Easegress to forward and load-balance traffic to Kubernetes pods.
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: easegress
spec:
  controller: megaease.com/ingress-controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: easegress
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: test
          servicePort: 8000
The Ingress above will be translated to one HTTPServer and one HTTPPipeline:
kind: HTTPServer
name: http-server-ingress
port: 10080 # indicated at installation
keepAlive: true
keepAliveTimeout: 75s
maxConnection: 10240
rules:
- host: test.example.com
  paths:
  - path: /
    backend: test
---
name: test
kind: HTTPPipeline
flow:
- filter: proxy
filters:
- name: proxy
  kind: Proxy
  mainPool:
    servers: # retrieved from k8s
    - url: http://10.0.1.1:19095
    - url: http://10.0.1.2:19095
    loadBalance:
      policy: roundRobin
I want to deploy an Easegress cluster with multiple members.
Do you have any suggestions for deploying in high-availability mode?
PS: I want to deploy many members; will there be any problems with Raft?