beatlabs / patron
This project forked from mantzas/patron
Microservice framework following best cloud practices with a focus on productivity.
License: Apache License 2.0
Some of the metrics created by Jaeger are not set up correctly to reflect a naming scheme that can be queried with ease.
The metric sync_request_latency_bucket does not provide any information on the metric's origin, so we have to use another namespace instead of sync.
trace.Setup() sets up the Prometheus observer, where we can use a proper namespace:
metricsFactory.Namespace("sync", nil) -> metricsFactory.Namespace(name, nil)
where name is the user-defined name of the service.
E.g. with the name patron, the metric would be patron_request_latency_bucket.
The doc does not say what the routes are for the default endpoints created by Patron (readiness, liveness, ...). We have to dive into the code to find them (/alive, /ready, ...).
Add some documentation to README.md.
In the README file, the provided example needs to be revised.
Notions like package aliasing, function type implementation, interface implementation etc. are considered prerequisites. That makes it more difficult for developers without much experience in Go to start a new project and become familiar with Patron.
Also, in the patron-cli generated code, it could be useful to implement some demo routes that are initially commented out. These routes would be fully functional once the comments were removed.
These kinds of practices, with working examples, make newcomers productive in a shorter time and the framework easier to understand.
The solution would be to update the current README example with code that is functional out of the box.
Also, a few working routes (initially commented out) could be integrated into the patron-cli generation process.
At the moment, when using Patron, it is necessary to include input validation logic at the HTTP level when using the request body.
For example:
var cmd book.AddBookCommand
err = req.Decode(&cmd)
// Here we need to add validation logic for the command
This is not a bug.
I suggest adding a Validator interface which returns []error (or a struct which contains the same) and changing the Request.Decode method to accept a validator, so the code will look something like this:
var cmd book.AddBookCommand
err = req.Decode(&cmd, book.AddBookCommandValidator{})
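A minimal sketch of what this could look like; the Validator interface, the decode helper, and the AddBookCommand fields below are illustrative assumptions, not Patron's actual API:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// Validator is the proposed interface (hypothetical, not part of Patron today).
type Validator interface {
	Validate(v interface{}) []error
}

// AddBookCommand is an illustrative request payload.
type AddBookCommand struct {
	Title string `json:"title"`
	ISBN  string `json:"isbn"`
}

// AddBookCommandValidator implements Validator for AddBookCommand.
type AddBookCommandValidator struct{}

func (AddBookCommandValidator) Validate(v interface{}) []error {
	cmd, ok := v.(*AddBookCommand)
	if !ok {
		return []error{errors.New("unexpected payload type")}
	}
	var errs []error
	if strings.TrimSpace(cmd.Title) == "" {
		errs = append(errs, errors.New("title is required"))
	}
	if cmd.ISBN == "" {
		errs = append(errs, errors.New("isbn is required"))
	}
	return errs
}

// decode mimics a Request.Decode that accepts an optional validator.
func decode(body []byte, v interface{}, val Validator) []error {
	if err := json.Unmarshal(body, v); err != nil {
		return []error{err}
	}
	if val != nil {
		return val.Validate(v)
	}
	return nil
}

func main() {
	var cmd AddBookCommand
	errs := decode([]byte(`{"title":""}`), &cmd, AddBookCommandValidator{})
	fmt.Println(len(errs)) // both fields are missing, so two validation errors
}
```

Passing nil as the validator keeps the current behaviour, so existing callers would stay untouched.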
Before switching to golangci-lint we had golint, which reported missing documentation.
golangci-lint supports golint (disabled by default), but it seems that it does not report golint's errors.
We should add golint alongside golangci-lint to have both until this is resolved.
See golangci/golangci-lint#464.
Add an extra golint step to the CircleCI build.
There is no way to inject a context into the service; it is built inside the Service's Run() function.
This is not a bug.
A solution could be the ability to inject a context into Run().
WARNING: That will be a breaking change.
Sarama now has support for consumer groups and a compatibility and API stability "promise" covering "2 releases + 2 months", which currently means Go 1.8 through 1.11 and Kafka 1.0 through 2.0.
A hook abstraction could be made available in order to support dynamic configuration changes etc.
The service should support the SIGHUP signal.
An interface/function type should be provided so that developers can write any code they want to run when this signal is received.
When creating a project we can specify which packages we need to go get in order to add them to the mod file.
Examples of this could be: amqp, kafka, etc.
We could optionally add the integration in main.go so that the user only has to provide the process function.
This is not a problem, more an enhancement. The issue is that there is no helper method for PATCH routes like there is for GET, POST etc.
So, the only way to create these kinds of routes is to use http.NewRoute().
The solution is to create a helper method for creating PATCH routes.
We could also add some more helper methods for HEAD and OPTIONS routes.
Consumers should handle messages synchronously, one after another. Right now this is not happening: when a message arrives, the consumer runs it on a goroutine.
Create a demo application with the following code in consumer handler:
fmt.Println("message arrived")
time.Sleep(5 * time.Second)
fmt.Println("message finished")
return nil
Send multiple messages to the consumer; the output will be:
message arrived
message arrived
message arrived
message finished
message finished
message finished
It should be:
message arrived
message finished
message arrived
message finished
message arrived
message finished
We should remove the per-message concurrency.
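A stripped-down sketch of the intended behaviour (the message type and event strings are illustrative): the handler is invoked inline in the consume loop instead of in a goroutine, so arrival order is preserved.

```go
package main

import "fmt"

type message string

// consume drains the channel synchronously: each message is fully
// processed before the next one is picked up. The buggy version
// effectively did `go process(msg)` here, which interleaves the output.
func consume(msgs <-chan message) []string {
	var events []string
	for msg := range msgs {
		events = append(events, "message arrived: "+string(msg))
		// ... the actual (possibly slow) processing happens here ...
		events = append(events, "message finished: "+string(msg))
	}
	return events
}

func main() {
	msgs := make(chan message, 3)
	for i := 0; i < 3; i++ {
		msgs <- message(fmt.Sprintf("msg-%d", i))
	}
	close(msgs)
	for _, e := range consume(msgs) {
		fmt.Println(e)
	}
}
```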
The contribution guidelines should be updated to organize contributions better and more efficiently.
The README will be updated to reflect the proposed workflow.
golangci-lint is a linter aggregator which allows for much more extensive and configurable linting.
Add golangci-lint to ci docker file and use the linter in the ci file.
Add a config file for golangci-lint in the root folder.
The framework has no generic middleware stack mechanism for the default HTTP component.
We want to be able to add middleware to routes, such as CORS middleware etc.
This is not a bug but an enhancement.
We should be able to set middlewares on the main service as an OptionFunc and then on the default HTTP component as a slice of MiddlewareFunc, applied to all routes and chained in order after the default auth and tracing middlewares.
We can take this feature even further:
In order to support SQS we need a way to send messages.
The most common scenario is to send messages to an SNS topic which is coupled to SQS.
SNS can also be used to fan-out, routing, etc.
Create an SNS Publisher in Patron under the trace package which supports the following:
After splitting the consumer group types into different packages (#70) we can now support consumer groups on multiple topics.
The async component needs a lot of configuring in order to be set up, which makes it a good candidate for the builder pattern.
Some of the options are:
Ideally we should have the std builder pattern which could look like:
cmp, err := async.New(consumer).WithFailureStrategy(strg).WithRetries(cfg).Create()
A synchronous producer is needed in order to simplify sending messages to Kafka without having to monitor for errors in a channel.
The producer should propagate distributed traces.
This is not a problem, more an enhancement. The issue is complementary to issue #39 and adds authenticated method helpers for PATCH, HEAD and OPTIONS like the existing ones for POST, PUT etc.
So, the only way to create these kinds of routes today is to use http.NewRoute().
It is not a bug.
We just add the auth helper methods for PATCH, HEAD and OPTIONS.
Patron needs to integrate with Elasticsearch to fulfill the needs of our clients.
We have to use the official library, which is getting the maintenance and support that is needed.
We should also trace every call to ES with OpenTracing, while respecting the global Patron tracing configuration.
In order to have logs that we can correlate across microservice invocations, we could propagate a correlation ID, like the tracing ID.
Introduce a new header in all supported components: HTTP, Kafka, SNS/SQS, RabbitMQ, etc.
The name of the header should be "X-CORRELATION-ID".
We already propagate the logger via the context.Context, so we could enhance this logger with an additional correlation ID field.
I get the following error when running the tests of a specific package:
go test ./async
# github.com/prometheus/client_golang/prometheus
../../go/pkg/mod/github.com/prometheus/client_golang@v0.9.1/prometheus/collector.go:62:17: undefined: Metric
../../go/pkg/mod/github.com/prometheus/client_golang@v0.9.1/prometheus/collector.go:102:7: undefined: Metric
FAIL github.com/thebeatapp/patron/async [build failed]
However, when I run:
go test ./...
it succeeds, but first updates my go.mod file:
github.com/prometheus/client_golang v0.9.1 h1:K47Rk0v/fkEfwfQet2KWhscE0cJzjgCCDBG2KHZoVno=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
+github.com/prometheus/client_golang v0.9.2 h1:awm861/B8OKDd2I/6o1dy3ra4BamzKhYOiGItCeZ740=
+github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
It is a bit of a weird behavior. If I had to guess, it is related to this one:
prometheus/client_golang#501
After adding v0.9.2, everything works fine.
I would suggest upgrading client_golang to v0.9.2.
Routes have a lot of configuration and could be a good candidate for the builder pattern.
We could support creating a route with the following options:
New("GET", "/user").WithStdHandler(hnd).Create()
or
New("GET", "/user").WithHandler(hnd).WithAuth(auth).WithTrace(cfg).Create()
This can easily be achieved with the OptionFunc arguments.
It needs to be added as an enhancement, keeping backwards compatibility.
Proposed Avro implementation to be used: https://github.com/linkedin/goavro
We should have more information on what each service does.
Add information to the examples readme file.
A previous issue (#19) pointed out the need to expose some kind of "healthiness" of an async component.
We would like to take this one step further and support it for every component of the service.
The requirements for this are the following:
We should investigate the following:
Currently the RabbitMQ consumer (async/amqp/amqp.go) uses hardcoded configuration for the exchange/queue/bindings.
This is problematic both when Patron runs against a clean RabbitMQ installation (as the client should provide the desired configuration), and when the RabbitMQ instance has already been configured, as the client will fail if a queue or exchange with a given name already exists but has a different configuration (for example, if the exchange type [direct, fanout, etc.] is different).
We need to update the amqp package to allow clients to provide the configuration.
Patron does not log HTTP requests like web servers do. Think of something like Apache or Nginx access.log, where each HTTP request is logged in a standard log format.
This is not a bug but a nice-to-have when developing an application.
A solution would be to log each HTTP request/response's information (for example client IP, time of request, HTTP status code, bytes sent, etc.) from some middleware. It seems this functionality can be incorporated in NewTracingMiddleware.
The general middleware stack runs after the route has been resolved, but this causes a problem when CORS middleware is implemented.
When http.Router runs, if the method is OPTIONS it follows a different path before resolving the route and running the handler (where we have chained the middleware stack), so the middlewares never run.
Not a bug, just a wrong implementation of the middleware feature.
Generic middlewares should run before the route is resolved.
We need to change the middleware signature to:
type MiddlewareFunc func(next http.Handler) http.Handler
And use a Router that implements the http.Handler interface to wrap and chain the middlewares.
So middlewares should look like:
newMiddleware := func(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		...
		// Next
		h.ServeHTTP(w, r)
	})
}
This is a patch for the previous middleware implementation and needs further development afterwards, unifying the default tracing and auth middlewares with this mechanism.
Version 0.19 removed the ability to consume Kafka messages without using consumer groups.
This is not a bug.
A solution would be to allow clients to consume without specifying a consumer group.
The HTTP component needs a lot of configuring in order to be set up, which makes it a good candidate for the builder pattern.
Some of the options are:
Ideally we should have the std builder pattern which could look like:
cmp, err := http.New().WithSSL(cert, key).WithRoutes(rr).WithMiddleWares().Create()
We need an option to allow a custom header for decoding/encoding the payload over HTTP.
Our service has a health endpoint for deploying our service in staging and production environments. It also relies upon Kafka for receiving critical data for its operation. Thus the connection with Kafka nodes determines our service readiness.
So far our readiness detection relied upon receiving the first message from a Kafka topic:
func (kc *kafkaComponent) Process(msg async.Message) error {
	.....
	if !kc.gotMessage {
		kc.ch <- true
		kc.gotMessage = true
	}
	.....
}
note: kc.ch notifies our health component
However, in environments with low traffic or zero traffic, that strategy caused problems with our existing deployment tools.
In order for our service to respond correctly to readiness requests, we need a mechanism that notifies the health component when we have successfully connected to Kafka, not when we receive the first message.
Unfortunately I could not find a workaround that does not require additions to the Patron framework.
Due to the blocking nature of Run and the retry strategy mechanism introduced in the async component, there is no way to determine whether the async component has successfully connected and is ready to consume.
I think the simplest solution would be to introduce a readiness handler option for the async component:
cmp, err := async.New(
	"kafka-topic",
	kafkaCmp.Process,
	cf,
	async.ConsumerRetry(retries, retryTimeout),
	async.ReadinessHandler(func() {
		kc.ch <- true
	}),
)
Apart from service readiness, this handler could be useful in other cases such as debugging and logging.
While working on the examples that Patron provides (see https://github.com/beatlabs/patron/tree/master/examples/first) I observed that the metrics have an issue. More specifically, status code counters and request counters do not increase as they should. The issue was observed in v0.23 and v0.24 of Patron.
When I hit POST localhost:50000/, I receive a 500 Internal Server Error. The counter for sync_requests{endpoint="POST-/",error="false"} is increased by one (here we expected an increase for error="true"). Furthermore, we do not see an increase of the sync_http_requests{endpoint="POST-/",status_code="5xx"} counter. I observed the results through localhost:50000/metrics.
We have to introduce readiness and liveness endpoints to support checks from Kubernetes and other systems.
The liveness check indicates that the service is running correctly. Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.
The endpoint will, by default, return 200 and will let the developer change this to 500 on demand.
The actual endpoint will be /alive.
The readiness check indicates that the service can accept incoming traffic. The endpoint will, by default, return 200 and will let the developer change it in order to transition from 500 to 200.
The endpoint for readiness will be /ready.
The existing /health endpoint will be removed.
There is already an issue, https://github.com/thebeatapp/patron/issues/24, which tries to introduce the healthiness concept in components. If the above works as designed, we should not implement the change in the components, since it would introduce a lot of unneeded complexity.
There is a route in Patron to handle a health check, but there is no implementation of a composite health check worker that monitors multiple health checks at once.
A composite health check accepts N functions, each reporting the health status of some particular part of the system. If any of them starts to fail, the composite also fails the health check.
This composite checker provides a single function that is fed to Patron for the default HTTP healthcheck endpoint.
The idea of how it could be done is here: https://github.com/taxibeat/canary-service/tree/master/internal/health
https://github.com/taxibeat/canary-service/blob/master/internal/health/health.go#L71
I'd only migrate the timer-based implementation to a channel-based one, so that functions can recover if needed and the decision whether to crash a service is delegated to the client.
Our CI reports a failure when trying to run "integration" tests:
{"lvl":"info","host":"2ec8dafb2a8d","srv":"test","ver":"1.0.0","time":"2019-02-25T07:03:50.661389192Z","src":"http/component.go:98","msg":"HTTP component listening on port 50000"}
--- FAIL: TestServer_Run_Shutdown (0.21s)
--- FAIL: TestServer_Run_Shutdown/success (0.10s)
:1:
Error Trace: service_test.go:58
Error: Received unexpected error:
listen tcp :50000: bind: address already in use
Test: TestServer_Run_Shutdown/success
FAIL
coverage: 89.3% of statements
It seems that a previous test service was not shut down completely. We could run every test on a different port to avoid port collisions.
The link provided in the README examples for the AMQP component points to the wrong example.
This is not a bug.
Update the link to point to the correct example.
Add an OptionFunc in the kafka package which will set the number of retries and the sleep interval of the Sarama producer.
The Patron service is a very complex struct and is currently built by injecting dependencies.
We could use the builder pattern to make the API much simpler, more user-friendly and intuitive.
We could have something like the following:
s, err := NewBuilder().WithKafkaConsumer(...).WithAmqpConsumer(...).WithRoutes(...).Build()
Given the above we could clean up the service and its options; other things, e.g. routes and middlewares, might not need to be fields of the service struct.
In order to better support the consumer features, we should split up the consumer types (consumer group and simple).
This will enable us to create the consumer group with support for multiple topics.
The package structure should be the following: the kafka root contains all the common code, while group and simple contain only what is required for each consumer type.
One of the ways to design routes in your app is to declare URL mappings in routes themselves, and then pass all of them to Patron.
So it looks something like this (pseudocode):
catHTTP := NewCatHTTP()
dogHTTP := NewDogHTTP()
var catRoutes, dogRoutes []http.Route
catRoutes = catHTTP.Routes()
dogRoutes = dogHTTP.Routes()
patron.New( ... patron.Routes(
combineRoutes(catRoutes, dogRoutes)
) ... )
...
func combineRoutes(rrs ...[]http.Route) ([]http.Route, error) { ... }
The proposal is to implement this method in Patron.
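A possible shape for combineRoutes, assuming a route is identified by its method and path; the Route struct here is a stand-in for Patron's http.Route:

```go
package main

import "fmt"

// Route stands in for Patron's http.Route; method plus path is assumed
// to uniquely identify a route.
type Route struct {
	Method string
	Path   string
}

// combineRoutes merges route slices, rejecting duplicates so that two
// sub-APIs cannot silently shadow each other.
func combineRoutes(rrs ...[]Route) ([]Route, error) {
	seen := make(map[string]bool)
	var out []Route
	for _, rr := range rrs {
		for _, r := range rr {
			key := r.Method + " " + r.Path
			if seen[key] {
				return nil, fmt.Errorf("duplicate route: %s", key)
			}
			seen[key] = true
			out = append(out, r)
		}
	}
	return out, nil
}

func main() {
	catRoutes := []Route{{Method: "GET", Path: "/cats"}}
	dogRoutes := []Route{{Method: "GET", Path: "/dogs"}}
	all, err := combineRoutes(catRoutes, dogRoutes)
	fmt.Println(len(all), err)
}
```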
Every component in Patron returns some information which, along with information from the service itself, is accessible via HTTP GET /info.
It seems that this feature:
Remove the info feature from the entire code base.
Patron currently uses one environment variable to set the Jaeger agent address.
This prevents the Kubernetes configuration from assigning the agent address dynamically, because the agent address (the agent runs as a daemon) is different for each pod.
Split the address into two variables that define the host and the port. This would make the configuration dynamic.
The metric package exists in order to have a unified namespace for metrics. The namespace for that was the name of the service supplied when creating a service.
Prometheus, when scraping the metrics through HTTP, attaches the scrape job name, IP and possibly other metadata, which makes the unified namespace useless.
Go metrics created by the Prometheus client also follow this rule.
THIS IS A BREAKING CHANGE AND AFFECTS CODE AND DEPLOYED DASHBOARDS.
TL;DR
The message key is hard-coded to nil, which disables the use of Kafka message keys.
Let's analyse an example: I have 2 services that communicate through a topic.
The first service produces multiple requests to the topic.
The second service consumes the requests and acts upon them.
In this particular case it is important that the order of the requests is maintained upon consumption.
In order to achieve that goal, message keys must be used, so that the producer can place a message on a partition based on that key (in order to be able to enforce message order at the partition level).
This is not a bug, it is a feature request in the form of an enhancement.
createProducerMessage() in kafka.go receives a pointer to a Message struct.
That Message struct should include a key property that can optionally be filled with a key.
That key will be passed on to the sarama.ProducerMessage Key property.
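A sketch of the proposed change; Message and producerMessage below are illustrative stand-ins for Patron's struct and sarama.ProducerMessage, not the real types:

```go
package main

import "fmt"

// Message mirrors the idea of adding an optional key to the async
// message; field names are illustrative.
type Message struct {
	Topic string
	Body  []byte
	Key   *string // nil keeps the current behaviour (no keyed partitioning)
}

// producerMessage stands in for sarama.ProducerMessage.
type producerMessage struct {
	Topic string
	Key   []byte
	Value []byte
}

// createProducerMessage copies the optional key through, so keyed
// messages land on a stable partition and per-key ordering is preserved.
func createProducerMessage(m *Message) producerMessage {
	pm := producerMessage{Topic: m.Topic, Value: m.Body}
	if m.Key != nil {
		pm.Key = []byte(*m.Key)
	}
	return pm
}

func main() {
	key := "user-42"
	pm := createProducerMessage(&Message{Topic: "requests", Body: []byte("payload"), Key: &key})
	fmt.Println(string(pm.Key))
}
```

Using a pointer for the key keeps the change backwards compatible: existing callers that never set it continue to produce unkeyed messages.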
In order to improve resilience to data loss we need to change the default level of acknowledgement reliability (WaitForLocal).
We need to add an OptionFunc to expose sarama's Producer.RequiredAcks option. The different values allowed are:
The trace SQL package contains the QueryRow method, which does not defer finishing the span with success.
Various span metadata are wrong or missing due to copy-paste errors.
Add a defer.
Support for AWS SQS as a consumer of messages would give us the opportunity to use this service as an alternative to Kafka and AMQP.
Use the AWS Go SQS SDK to implement a consumer for Patron.
Following things are required:
All HTTP routes are measured with Prometheus metrics except /health and /metrics, yet they too are technically HTTP routes.
Not a bug.
Monitor the /health and /metrics routes the same way Patron tracks other routes.