Tarantool Kubernetes operator


The Tarantool Operator provides automation that simplifies the administration of Tarantool Cartridge-based clusters on Kubernetes.

The Operator introduces new API version tarantool.io/v1alpha1 and installs custom resources for objects of three custom types: Cluster, Role, and ReplicasetTemplate.


Resources

Cluster represents a single Tarantool Cartridge cluster.

Role represents a Tarantool Cartridge user role.

ReplicasetTemplate is a template for StatefulSets created as members of Role.
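As an illustration, a minimal Cluster object might be declared as follows. Only the apiVersion and kind follow the API described above; the metadata values are placeholders reused from later in this guide, and the actual spec schema is defined by the operator's CRDs, not shown here:

```yaml
# Hypothetical sketch of a Cluster custom resource.
# apiVersion and kind follow the tarantool.io/v1alpha1 API described above;
# the names are placeholders, and the real spec schema comes from the
# operator's CRDs (regenerate them with `make manifests`).
apiVersion: tarantool.io/v1alpha1
kind: Cluster
metadata:
  name: tarantool-cluster      # cluster name used later in this guide
  namespace: tarantool-app
```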

Resource ownership

Resources created by the Operator during deployment form the following ownership hierarchy:

[Diagram: resource ownership hierarchy]

Resource ownership directly affects how the Kubernetes garbage collector works: if you delete a parent resource, all of its dependents are removed as well.
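Under the hood, this hierarchy rests on standard Kubernetes ownerReferences metadata. As a sketch, a child resource (for example a StatefulSet created for a Role) would carry a reference like the following; the Role name here is illustrative, and the uid is filled in by Kubernetes:

```yaml
# Illustrative ownerReferences entry on a child resource.
# The garbage collector uses it to cascade deletes from the parent.
metadata:
  ownerReferences:
    - apiVersion: tarantool.io/v1alpha1
      kind: Role                 # the owning parent resource
      name: storage              # hypothetical Role name
      uid: <parent-uid>          # set by Kubernetes at creation time
      controller: true
      blockOwnerDeletion: true
```

Deleting the parent (for example with kubectl delete) then removes every resource that references it.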

Documentation

The documentation is available on the official Tarantool website.

Deploying the Tarantool operator on minikube

  1. Install the required deployment utilities: kubectl, helm, and minikube (or another tool for running a local Kubernetes cluster; this guide uses minikube).

    Create a minikube cluster:

    $ minikube start --memory=4096

    You will need 4 GB of RAM allocated to the minikube cluster to run the examples.

    Ensure minikube is up and running:

    $ minikube status
    ---
    minikube
    type: Control Plane
    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured
  2. Build the operator image

    $ make docker-build

    By default, the image is tagged as tarantool-operator:<VERSION>

  3. Add the image to the local minikube registry

    $ make push-to-minikube
    ---
    minikube image load tarantool-operator:0.0.9

NOTE: If you want to use the official Docker image of the Tarantool operator, use the Helm charts from the tarantool Helm repository. Read more about this in the documentation.

  4. Install the operator

    $ helm install -n tarantool-operator operator helm-charts/tarantool-operator \
                 --create-namespace \
                 --set image.repository=tarantool-operator \
                 --set image.tag=0.0.9
    ---
    NAME: operator
    LAST DEPLOYED: Wed Dec 15 22:54:13 2021
    NAMESPACE: tarantool-operator
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None

    Or you can use make:

    $ make helm-install-operator

    Ensure the operator is up:

    $ kubectl get pods -n tarantool-operator
    ---
    NAME                                  READY   STATUS    RESTARTS   AGE
    controller-manager-778db958cf-bhw6z   1/1     Running   0          77s

    Wait for controller-manager-xxxxxx-xx Pod's status to become Running.

Example Application: key-value storage

examples/kv contains a Tarantool-based distributed key-value storage. Data is accessed via an HTTP REST API.

Application topology

[Diagram: application topology]

Running the application

We assume that commands are executed from the repository root and that the Tarantool Operator is up and running.

  1. Create a cluster:

    $ helm install -n tarantool-app cartridge-app helm-charts/tarantool-cartridge \
                 --create-namespace \
                 --set LuaMemoryReserveMB=256
    ---
    NAME: cartridge-app
    LAST DEPLOYED: Wed Dec 15 23:50:09 2021
    NAMESPACE: tarantool-app
    STATUS: deployed
    REVISION: 1

    Or you can use make:

    $ make helm-install-cartridge-app

    Wait until all the cluster Pods are up (status becomes Running):

    $ kubectl -n tarantool-app get pods
    ---
    NAME          READY   STATUS    RESTARTS   AGE
    routers-0-0   1/1     Running   0          6m12s
    storage-0-0   1/1     Running   0          6m12s
    storage-0-1   1/1     Running   0          6m12s
  2. Ensure the cluster has become operational:

    $ kubectl -n tarantool-app describe clusters.tarantool.io/tarantool-cluster

    Wait until Status.State is Ready:

    ...
    Status:
      State:  Ready
    ...
  3. Access the cluster web UI:

    $ kubectl -n tarantool-app port-forward routers-0-0 8081:8081
    ---
    Forwarding from 127.0.0.1:8081 -> 8081
    Forwarding from [::1]:8081 -> 8081
    Handling connection for 8081
  4. Access the key-value API:

    1. Store some value:

      $ curl -XPOST http://localhost:8081/kv -d '{"key":"key_1", "value": "value_1"}'
      ---
      {"info":"Successfully created"}
    2. Access stored value:

      $ curl http://localhost:8081/kv/key_1
      ---
      "value_1"
    3. Update stored value:

      $ curl -XPUT http://localhost:8081/kv/key_1 -d '"new_value_1"'
      ---
      ["key_1", "new_value_1"]
    4. Delete stored value:

      $ curl -XDELETE http://localhost:8081/kv/key_1
      ---
      {"info":"Successfully deleted"}
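The four calls above can also be made programmatically. The sketch below reproduces the same request/response shapes using only Python's standard library; since it cannot assume a running cluster, a minimal in-process HTTP server stands in for the Tarantool app (the handler is a stand-in, not the application's actual implementation):

```python
# Sketch of the kv REST protocol shown above. A tiny in-process HTTP
# server stands in for the Tarantool cluster behind the port-forward.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

STORE = {}  # stand-in for the cluster's storage

class KvHandler(BaseHTTPRequestHandler):
    def _reply(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def _read_json(self):
        return json.loads(self.rfile.read(int(self.headers["Content-Length"])))

    def do_POST(self):               # POST /kv  with {"key": ..., "value": ...}
        doc = self._read_json()
        STORE[doc["key"]] = doc["value"]
        self._reply(200, {"info": "Successfully created"})

    def do_GET(self):                # GET /kv/<key>
        self._reply(200, STORE[self.path.rsplit("/", 1)[-1]])

    def do_PUT(self):                # PUT /kv/<key> with a bare JSON value
        key = self.path.rsplit("/", 1)[-1]
        STORE[key] = self._read_json()
        self._reply(200, [key, STORE[key]])

    def do_DELETE(self):             # DELETE /kv/<key>
        del STORE[self.path.rsplit("/", 1)[-1]]
        self._reply(200, {"info": "Successfully deleted"})

    def log_message(self, *args):    # silence per-request logging
        pass

def call(method, path, data=None):
    """Issue one request against the stand-in server and decode the reply."""
    req = Request(f"http://localhost:{PORT}{path}",
                  data=None if data is None else json.dumps(data).encode(),
                  method=method)
    with urlopen(req) as resp:
        return json.loads(resp.read())

server = HTTPServer(("localhost", 0), KvHandler)
PORT = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

print(call("POST", "/kv", {"key": "key_1", "value": "value_1"}))
print(call("GET", "/kv/key_1"))
print(call("PUT", "/kv/key_1", "new_value_1"))
print(call("DELETE", "/kv/key_1"))
```

Against a real cluster you would target http://localhost:8081 with the port-forward from step 3 active, and skip the stand-in server entirely.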

Scaling the application

Increase the number of replica sets in the storage Role:

In the cartridge Helm chart, edit the helm-charts/tarantool-cartridge/values.yaml file so that the storage entry reads:

- RoleName: storage
  ReplicaCount: 2
  ReplicaSetCount: 2

Then run:

$ helm upgrade -n tarantool-app cartridge-app helm-charts/tarantool-cartridge \
           --set LuaMemoryReserveMB=256

This will add another storage role replica set to the existing cluster. View the new cluster topology via the cluster web UI.

Read more about cluster management in the documentation.

Development

Run make help to list all targets.

Below are some of them.

Regenerate the Custom Resource Definitions

$ make manifests

Build the tarantool-operator Docker image

$ make docker-build

Run tests

$ make test
