- Using an AWS managed EKS cluster to deploy the Kubernetes configuration (see the `kube` folder).
- It's a single-node cluster using the `t3a.small` instance type (2 vCPUs, 2 GiB memory).
- Running two pod replicas for redundancy.
- Running an ELB service in front of the pods in a public subnet.
- The backend application is completely stateless.
- Using an AWS managed Redis cluster for all database and stream needs.
- Everything above lives in a single VPC, where everything is in a private subnet except the ELB.
- Using GitHub Actions for CI/CD needs.
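The two-replica setup above can be sketched as a minimal manifest. This is an illustrative assumption only — the names, image tag, port, and resource figures are not taken from the repo; the real manifests live in the `kube` folder:

```yaml
# Hypothetical sketch -- the actual manifests are in kube/.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clover
spec:
  replicas: 2                    # two pod replicas for redundancy
  selector:
    matchLabels:
      app: clover
  template:
    metadata:
      labels:
        app: clover
    spec:
      containers:
        - name: clover
          image: clover:latest   # assumed image name
          resources:
            requests:
              cpu: 500m          # the t3a.small node only has 2 vCPUs total
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: clover
spec:
  type: LoadBalancer             # provisions the ELB in the public subnet
  selector:
    app: clover
  ports:
    - port: 80
      targetPort: 5000           # backend port assumed from the local-dev section
```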
Any browser with WebSocket support is supported. If WebSockets are not working in your browser for any reason, the app will not work.
To make testing easier, every time you refresh your browser you are randomly assigned a new user. So if you switch between tabs, or even refresh your current tab, you will be a new user.
- Make sure you have the following things already installed on your system:
  - node version `lts/fermium`
  - redis `6.x`
- Run `yarn install` in the backend and frontend folders.
- Start redis with default parameters: `redis-server`
- Run `yarn start:dev` in the backend folder.
- Create an `.env` file in the frontend folder with the value `REACT_APP_BACKEND_ENDPOINT=localhost:5000`; change the value to whatever host and port the backend is started on.
- Run `yarn start` in the frontend folder.
- You should see your application at `localhost:3000`.
Note: If you do not start redis with default parameters, you can create a `.env` file in the backend folder with the keys `REDIS_HOST` and `REDIS_PORT`.
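The two optional `.env` files described above can be created from the repo root like this. The `mkdir` line is only there to keep the sketch self-contained, and the redis values shown are the stock defaults:

```shell
# Sketch of the .env files described above.
# mkdir is only here so the sketch runs anywhere; in the real repo
# the frontend/ and backend/ folders already exist.
mkdir -p frontend backend
echo "REACT_APP_BACKEND_ENDPOINT=localhost:5000" > frontend/.env
# Only needed when redis is NOT started with default parameters:
printf "REDIS_HOST=127.0.0.1\nREDIS_PORT=6379\n" > backend/.env
cat frontend/.env backend/.env
```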
You don't need the steps below for regular development. They are only for the rare cases when you want to see the whole architecture as it is.
- Make sure you have the following things on your system, along with everything in "How to develop locally":
  - minikube
  - kubectl
  - docker
  - Your local machine's IP address
- If you do not know your local IP address, you can find it by starting the frontend; you'll see it in the terminal along with `localhost`.
- Start minikube: `minikube start`
- Start redis with the command `redis-server --protected-mode no`
- Build your local docker image:
  - Make sure you are in the root directory of the repo
  - Run `eval $(minikube docker-env)`
  - Run `docker build -t clover .`
- Start the Kubernetes cluster along with the pods and services:
  - Run `kubectl apply -f kube/clover.development.yaml`
  - You can check the status of your pods and service with `kubectl get pods` and `kubectl get svc clover`
- Now, to access the cluster, we have to tunnel our network: `minikube service clover --url`
- Access the link and hurray.
- To stop and clean up: `kubectl delete -f kube/clover.development.yaml` and `minikube stop`
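The minikube steps above can be collected into a small helper script. `dev-up.sh` is a hypothetical name (the repo does not ship this file), and the last line only checks the script's syntax without actually starting anything:

```shell
# Write the minikube steps above into a hypothetical helper script.
cat > dev-up.sh <<'EOF'
#!/usr/bin/env sh
set -e                                  # stop on the first failing step
minikube start
eval "$(minikube docker-env)"           # point docker at minikube's daemon
docker build -t clover .
kubectl apply -f kube/clover.development.yaml
minikube service clover --url           # prints the tunnelled URL
EOF
chmod +x dev-up.sh
sh -n dev-up.sh && echo "syntax ok"     # dry syntax check only, nothing runs
```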
Because it's currently running on a pretty small instance that already holds 6 pods (4 system pods + 2 application pods), it's not exactly a big node, so don't expect great performance. I have balanced the parameters for receiving and pushing updates so that all updates are propagated within ~1 second. We can improve performance by allocating at least 1 CPU per pod and using a bigger Redis cluster.