The Tarantool Operator provides automation that simplifies the administration of Tarantool Cartridge-based clusters on Kubernetes.

The Operator introduces a new API version `tarantool.io/v1alpha1` and installs custom resources for objects of three custom types: Cluster, Role, and ReplicasetTemplate.
- Resources
- Resource ownership
- Deploying the Tarantool operator on minikube
- Example: key-value storage
## Resources

- `Cluster` represents a single Tarantool Cartridge cluster.
- `Role` represents a Tarantool Cartridge user role.
- `ReplicasetTemplate` is a template for the StatefulSets created as members of a `Role`.
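For orientation, all three objects are ordinary namespaced custom resources under the `tarantool.io/v1alpha1` API group. The skeletons below are illustrative only; the names are taken from the key-value example later in this document, and the full specs live in `examples/kv/deployment.yaml`:

```yaml
# Illustrative skeletons only -- the real specs are in examples/kv/deployment.yaml.
apiVersion: tarantool.io/v1alpha1
kind: Cluster
metadata:
  name: examples-kv-cluster
---
apiVersion: tarantool.io/v1alpha1
kind: Role
metadata:
  name: storage
---
apiVersion: tarantool.io/v1alpha1
kind: ReplicasetTemplate
metadata:
  name: storage-template
```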
## Resource ownership

Resources managed by the Operator form the following ownership hierarchy: a Cluster owns its Roles, and each Role owns the StatefulSets created from its ReplicasetTemplate.

Resource ownership directly affects how the Kubernetes garbage collector works: if you delete a parent resource, all of its dependents are removed as well.
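Under the hood, ownership is expressed with the standard Kubernetes `metadata.ownerReferences` field on each child object. A sketch of what such a reference looks like on a StatefulSet owned by a Role (the `uid` here is a made-up placeholder):

```yaml
metadata:
  ownerReferences:
    - apiVersion: tarantool.io/v1alpha1
      kind: Role
      name: storage
      uid: 00000000-0000-0000-0000-000000000000  # placeholder value
      controller: true
      blockOwnerDeletion: true
```

Deleting the Role therefore lets the garbage collector remove this StatefulSet as well.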
## Deploying the Tarantool operator on minikube

1. Install the required software: `minikube` and `kubectl`.

2. Create a `minikube` cluster:

   ```shell
   minikube start --memory=4096
   ```

   You will need 4 GB of RAM allocated to the `minikube` cluster to run the examples.

   Ensure `minikube` is up and running:

   ```shell
   minikube status
   ```

   In case of success you will see this output:

   ```
   host: Running
   kubelet: Running
   apiserver: Running
   ```
3. Enable the Ingress add-on:

   ```shell
   minikube addons enable ingress
   ```
4. Create operator resources:

   ```shell
   kubectl create -f deploy/service_account.yaml
   kubectl create -f deploy/role.yaml
   kubectl create -f deploy/role_binding.yaml
   ```
5. Create the Tarantool Operator CRDs (Custom Resource Definitions):

   ```shell
   kubectl create -f deploy/crds/tarantool_v1alpha1_cluster_crd.yaml
   kubectl create -f deploy/crds/tarantool_v1alpha1_role_crd.yaml
   kubectl create -f deploy/crds/tarantool_v1alpha1_replicasettemplate_crd.yaml
   ```
6. Start the operator:

   ```shell
   kubectl create -f deploy/operator.yaml
   ```

   Ensure the operator is up:

   ```shell
   kubectl get pods --watch
   ```

   Wait for the `tarantool-operator-xxxxxx-xx` Pod's status to become `Running`.
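Each file under `deploy/crds/` follows the standard `CustomResourceDefinition` shape. As a rough sketch (the field values below are inferred from resource names used elsewhere in this document, not copied from the real files), the Cluster CRD looks approximately like:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusters.tarantool.io   # <plural>.<group>
spec:
  group: tarantool.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Cluster
    listKind: ClusterList
    plural: clusters
    singular: cluster
```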
## Example: key-value storage

`examples/kv` contains a Tarantool-based distributed key-value storage. Data is accessed via an HTTP REST API.

We assume that the commands below are executed from the repository root and that the Tarantool Operator is up and running.
1. Create a cluster:

   ```shell
   kubectl create -f examples/kv/deployment.yaml
   ```

   Wait until all the cluster Pods are up (status becomes `Running`):

   ```shell
   kubectl get pods --watch
   ```
2. Ensure the cluster became operational:

   ```shell
   kubectl describe clusters.tarantool.io examples-kv-cluster
   ```

   Wait until `Status.State` is `Ready`:

   ```
   ...
   Status:
     State:  Ready
   ...
   ```
3. Access the cluster web UI:

   Get the cluster IP address with `minikube ip`; the web UI is served at `http://MINIKUBE_IP`.

   NOTE: Due to a recent bug in Ingress, the web UI may be inaccessible. If needed, you can try this workaround.
4. Access the key-value API:

   1. Store some value:

      ```shell
      curl -XPOST http://MINIKUBE_IP/kv -d '{"key":"key_1", "value": "value_1"}'
      ```

      In case of success you will see this output:

      ```
      {"info":"Successfully created"}
      ```

   2. Access stored values:

      ```shell
      curl http://MINIKUBE_IP/kv_dump
      ```

      In case of success you will see this output:

      ```
      {"store":[{"key":"key_1","value":"value_1"}]}
      ```
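The request bodies above are plain JSON, so they can be generated instead of hand-written. A small sketch (`kv_payload` is a local helper defined here, not part of the project):

```shell
# kv_payload builds the JSON body expected by POST /kv.
kv_payload() {
  printf '{"key":"%s", "value": "%s"}' "$1" "$2"
}

kv_payload "key_1" "value_1"
# → {"key":"key_1", "value": "value_1"}

# Against a running cluster you would pass it to curl, e.g.:
#   curl -XPOST "http://MINIKUBE_IP/kv" -d "$(kv_payload key_2 value_2)"
```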
To scale the cluster:

1. Increase the number of replica sets in the Storages Role:

   ```shell
   kubectl edit roles.tarantool.io storage
   ```

   This will open the resource in a text editor. Change the `spec.replicas` field value to 3:

   ```yaml
   spec:
     replicas: 3
   ```

   Save your changes and exit the editor. This will add new replica sets to the existing cluster.

   View the new cluster topology via the cluster web UI.

2. Increase the number of replicas across all Storages Role replica sets:

   ```shell
   kubectl edit replicasettemplates.tarantool.io storage-template
   ```

   This will open the resource in a text editor. Change the `spec.replicas` field value to 3:

   ```yaml
   spec:
     replicas: 3
   ```

   Save your changes and exit the editor. This will add one more replica to each Storages Role replica set.

   View the new cluster topology via the cluster web UI.
NOTE: When `kubectl` 1.16 is out, you will be able to scale the application with a single `kubectl scale` command, for example `kubectl scale roles.tarantool.io storage --replicas=3`. With older versions of `kubectl` this is impossible due to this bug.
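In the meantime, the same change can be made non-interactively with `kubectl patch` and a JSON merge patch. A sketch (`replicas_patch` is a local helper defined here, not part of the project):

```shell
# replicas_patch builds a JSON merge patch that sets spec.replicas.
replicas_patch() {
  printf '{"spec":{"replicas":%d}}' "$1"
}

replicas_patch 3
# → {"spec":{"replicas":3}}

# Apply it to the Storages Role without opening an editor:
#   kubectl patch roles.tarantool.io storage --type merge -p "$(replicas_patch 3)"
```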
## Development

```shell
make build
make start
./bootstrap.sh
make test
```