Some spike work in preparation for the final solution for the Note product, which will be the backend for one or more Miruvor SaaS offerings.
This is the way of working for Spikes. It is an extension of event-driven:
Request -> Command -> Event -> Entity -> Response
1. A Request is the instruction from the outside world to perform some task, but not yet a Command.
2. After a Request is validated as suitable for a Command, it is transformed into a Command and the callback actor is added.
3. An EventSourcedBehavior handles the Command and checks whether the current State allows the Command to be processed.
4. If step 3 succeeds, the Command is transformed into an Event, which is persisted.
5. The Event is applied to the State, updating it into a new State that holds the Entity from the Event.
6. The requester from step 1 is informed about the result of steps 2..5 in the form of an HTTP status code and the Response from the Entity.
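The steps above can be sketched in plain Scala. This is a minimal, hypothetical sketch (a Note entity with invented names, and a `Long` id standing in for ULID/TSID), not the actual Spikes code:

```scala
// Hypothetical protocol for the Request -> Command -> Event -> Entity flow.
final case class CreateNoteRequest(title: String, body: String)   // from the outside world
final case class CreateNote(title: String, body: String)          // validated Command
final case class NoteCreated(id: Long, title: String, body: String) // persisted Event
final case class Note(id: Long, title: String, body: String)        // Entity in the State

// Step 2: a Request only becomes a Command after validation.
def validate(req: CreateNoteRequest): Either[String, CreateNote] =
  if (req.title.trim.isEmpty) Left("title must not be empty")
  else Right(CreateNote(req.title.trim, req.body))

// Step 4: the Command is transformed into an Event.
def toEvent(cmd: CreateNote, id: Long): NoteCreated =
  NoteCreated(id, cmd.title, cmd.body)

// Step 5: the Event updates the State into a new State holding the Entity.
def applyEvent(state: Map[Long, Note], evt: NoteCreated): Map[Long, Note] =
  state + (evt.id -> Note(evt.id, evt.title, evt.body))
```

A failed validation short-circuits with a `Left`, which maps naturally onto the HTTP error status of step 6.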
Requests are either a Create, Update or Delete instruction with the required data. Queries are defined later and return the response for the requested entity, or a list of such responses.
From the above and with Akka Persistence we get an architecture that is basically a Handlers container with a Command Handler and an Event Handler:
request via akka-http-route ->
validator ->
command-handler ->
event-handler &
respond with updated state to requestor
To the outside world, this looks like the ubiquitous HTTP request -> response way of working.
Akka Persistence is responsible for persisting the events created by the command handler, and it will replay the previously persisted events on (re)start of a Persistent Actor (actually a Behavior, but I still like Actor better).
The validator and the reply to the requestor are the reason for having a Request instead of the requestor just sending a Command.
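That Handlers container can be sketched as two pure functions, mirroring the shape of Akka Persistence Typed's command handler and event handler ((State, Command) => Effect and (State, Event) => State). All names here are illustrative, and the Either stands in for Akka's Effect; in the real service these would live inside an EventSourcedBehavior:

```scala
// Illustrative sketch of the two handlers, outside of Akka.
final case class NoteState(notes: Map[Long, String] = Map.empty)

sealed trait NoteCommand
final case class AddNote(id: Long, text: String) extends NoteCommand

sealed trait NoteEvent
final case class NoteAdded(id: Long, text: String) extends NoteEvent

// Command handler: checks whether the current State allows the Command,
// and if so emits the Event to be persisted.
def commandHandler(state: NoteState, cmd: NoteCommand): Either[String, NoteEvent] =
  cmd match {
    case AddNote(id, _) if state.notes.contains(id) => Left(s"note $id already exists")
    case AddNote(id, text)                          => Right(NoteAdded(id, text))
  }

// Event handler: applies a persisted Event to the State. On (re)start,
// Akka Persistence replays every stored event through exactly this function.
def eventHandler(state: NoteState, evt: NoteEvent): NoteState =
  evt match {
    case NoteAdded(id, text) => NoteState(state.notes + (id -> text))
  }
```

Because the event handler is a pure function of (State, Event), replaying the journal after a restart deterministically rebuilds the same State.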
- Akka Typed
- Akka http
- Circe for json processing via Heiko's lib
- Akka Typed persistence for Event Sourcing
- Kryo for serialization
- Cassandra for Akka persistence
- ULIDs for unique, sortable IDs
- TSID: faster and less resource consuming replacement for ULID
- Chimney for case class transformations
- Scalactic for triple equals with type safety
- Scala URI to validate URLs
See Implementing µ-services with Akka
- Password regex (MKYong)
- Akka http Directives (ProgramCreek)
- Akka http json (Heiko)
- ScalaTest
- SBT jib
- Akka http tools (sbus labs)
- Akka http validation
- Akka http metrics
- Akka http OAuth2
- AirFrame ULID
- Circe and ULID
- Complete Example
- TSID; very interesting: both UUID and ULID are too long and slow, use TSID instead.
The initial idea was to build a simple note-taking backend. Gradually some new features crept in, such as Tasks, Events and Log/Journal entries. That scope-creep led to the idea of creating a simple CRM system, with Users logging their activities with Employees of Companies. This backend will focus on storing the data created by the users in a traceable, retrievable and recoverable way. The integration with email and other external systems is not included yet.
In the end I produced a simple note-taking backend. All aspirations for more functionality seem over the top at the moment. With the current implementation of Note, all the considered extras can be implemented in the frontend.
kubectl create configmap name --from-literal=SECRET_KEY=$ENV_VAR --from-literal=OTHER_SECRET_KEY=$OTHER_ENV_VAR
see k8s docs
See SO
First, enable the necessary k8s services: dns, ingress and cert-manager. As I am using microk8s at the moment, this is as easy as microk8s enable dns, ingress, cert-manager on one of the k8s cluster nodes.
In the folder /k8s in this project there are 5 YAML files. Apply these in this order:
- cluster-issuer.yaml
- microbot-ingress.yaml
- service.yaml
- deployment.yaml
- spikes-ingress.yaml
The real reason I had to pick Cassandra as a database is that the Astra offering is extremely cool for small projects like this. It has been free for me due to very limited traffic and storage size. But Cassandra is not relational. Not that I like relational databases. Not at all. But that type of database is convenient, as there is a lot of experience, including my own, with RDBMSs and ORMs. I also do not like ORMs. So, what's a good solution that can use C*? Event Sourcing :-).
That is the real reason for choosing this rather exotic setup. But after a while I have come to really love Event Sourcing! It's unbeatable for separating concerns and as a basis for growing a backend service.
The only thing I haven't figured out yet is scalability: I don't see how to combine the advantages of ES with clustering. Clustering is a great method to guarantee uptime, for instance when a Kubernetes pod with one of the nodes of the app cluster goes down unexpectedly. Rolling updates are possible with one node, so that part is covered by k8s.
Note that CQRS is never mentioned in this document. It is considered to be one of the pillars of Event Sourcing, but I don't see what's so great about it. So, no segregation yet. Maybe when there is an actual need. Also, I find a Single Place of Truth (SPoT) essential: having State of an actor at one place and the Read-side of CQRS at another is begging for disaster as soon as the programmer (me) forgets about updating logic in both places. This is one of those instances where DRY is actually useful and good.
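The SPoT idea can be illustrated with a small sketch (names invented for illustration): queries are derived from the same State the event handler builds, so there is no second read model that can drift out of sync:

```scala
// Single Place of Truth: one State serves both the write side and queries.
final case class Task(id: Int, done: Boolean)

final case class State(tasks: Map[Int, Task] = Map.empty) {
  // A "query" is just a view derived from this one State,
  // not a separately maintained CQRS read side.
  def open: List[Task] = tasks.values.filterNot(_.done).toList
}

sealed trait Event
final case class TaskAdded(id: Int) extends Event
final case class TaskDone(id: Int)  extends Event

// The single update function; there is no second place to forget to change.
def update(state: State, evt: Event): State = evt match {
  case TaskAdded(id) => State(state.tasks + (id -> Task(id, done = false)))
  case TaskDone(id)  => State(state.tasks.updatedWith(id)(_.map(_.copy(done = true))))
}
```

Folding the journal through `update` yields the one State that every query reads, which is exactly the DRY property argued for above.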
- Comments, mainly in a Router
- Tags (CRUD, tags cloud)
- External QA and Performance tests
Retrieve list of tags for the Spikes image:
curl -H "Authorization: Bearer ${GHCR_TOKEN}" https://ghcr.io/v2/jvorhauer/spikes/tags/list
Use sbt jibImageBuild to create that image. Use git checkout tags/vM.M.P to check out a specific tag before calling jib to create an image of a specific release tag.