cirocosta / monero-operator

A Kubernetes-native way of deploying Monero nodes and even whole networks: express your intention and let Kubernetes run it for you.

Home Page: https://www.getmonero.org/

License: Apache License 2.0

monero-operator's Introduction

monero-operator

A Kubernetes-native way of deploying Monero nodes, networks, and miners: express your intention and let Kubernetes run it for you.



you: "Hi, I'd like two public nodes, and three miners please".

k8s: "Sure thing"

k8s: "It looks like you want two public nodes, but I see 0 running - let me create them for you."

k8s: "It looks like you want three miners, but I see 0 running - let me create them for you."

you: "Actually, I changed my mind - I don't want to mine on minexmr, I want cryptonode.social".

k8s: "Good choice, pool concentration sucks - let me update your miners for you :)"



See ./docs for detailed documentation about each resource.



Example

Full node

To run a single full node, all you need to do is create a single-replica MoneroNodeSet.

kind: MoneroNodeSet
apiVersion: utxo.com.br/v1alpha1
metadata:
  name: full-node
spec:
  replicas: 1

With a MoneroNodeSet you express the intention of having a set of monerod nodes running with a particular configuration, and monero-operator takes care of making it happen.

For instance, with kubectl-tree we can see that the operator took care of instantiating a StatefulSet:

$ kubectl tree moneronodeset.utxo.com.br full-node
NAMESPACE  NAME                                         READY
default    MoneroNodeSet/full-node                      True 
default    ├─Service/full-node                          -    
default    │ └─EndpointSlice/full-node-d2crv            -    
default    └─StatefulSet/full-node                      -    
default      ├─ControllerRevision/full-node-856644d54d  -    
default      └─Pod/full-node-0                          True 

with a pre-configured set of flags:

$ kubectl get pod full-node-0 -ojsonpath={.spec.containers[*].command} | jq '.'
[
  "monerod",
  "--data-dir=/data",
  "--log-file=/dev/stdout",
  "--no-igd",
  "--no-zmq",
  "--non-interactive",
  "--p2p-bind-ip=0.0.0.0",
  "--p2p-bind-port=18080",
  "--rpc-restricted-bind-ip=0.0.0.0",
  "--rpc-restricted-bind-port=18089"
]

and a PersistentVolumeClaim attached with enough disk space for it:

$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY
data-full-node-0   Bound    pvc-1c60e835-d5f9-41c9-8509-b0e4b3b71f6b   200Gi

Since all of this is declarative, updating our node is a matter of expressing our new intent by updating the MoneroNodeSet definition and letting the operator take care of updating things behind the scenes.

For instance, assuming we now want to make it public, accept lots of peers, allow a higher bandwidth limit, enable the DNS blocklist, and enforce DNS checkpointing, we'd patch the object with the following:

kind: MoneroNodeSet
apiVersion: utxo.com.br/v1alpha1
metadata:
  name: full-node
spec:
  replicas: 1

  monerod:
    args:
      - --public
      - --enable-dns-blocklist
      - --enforce-dns-checkpointing
      - --out-peers=1024
      - --in-peers=1024
      - --limit-rate=128000

This would then lead to an update of the node (Kubernetes takes care of signalling monerod, waiting for it to shut down gracefully - did I mention that it has properly set readiness probes too? - detaching the disk, and so on).

Mining cluster

Similar to MoneroNodeSet, with a MoneroMiningNodeSet you express the intention of having a cluster of x replicas running, and then the operator takes care of making that happen.

For instance, to run a set of 5 miners spread across different Kubernetes nodes:

kind: MoneroMiningNodeSet
apiVersion: utxo.com.br/v1alpha1
metadata:
  name: miners
spec:
  replicas: 5
  hardAntiAffinity: true

  xmrig:
    args:
      - -o
      - cryptonote.social:5556
      - -u
      - 891B5keCnwXN14hA9FoAzGFtaWmcuLjTDT5aRTp65juBLkbNpEhLNfgcBn6aWdGuBqBnSThqMPsGRjWVQadCrhoAT6CnSL3.node-$(id)
      - --tls

ps.: $(id) is indeed a thing - wherever you put it, it gets interpolated with an integer that identifies the replica.

pps.: xmrig is used under the hood.
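
For illustration, here's what the user arg above would look like for the first replica (index 0) after $(id) interpolation:

# the -u arg rendered for replica 0 - illustrative
- -u
- 891B5keCnwXN14hA9FoAzGFtaWmcuLjTDT5aRTp65juBLkbNpEhLNfgcBn6aWdGuBqBnSThqMPsGRjWVQadCrhoAT6CnSL3.node-0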

Private network

With a MoneroNetwork you express the intention of having a network of inter-connected Monero nodes: the operator takes care of not only bringing monerod up for you, but also of providing the proper flags to each daemon so that the nodes are exclusive peers of one another.

For instance, consider the following private regtest setup:

kind: MoneroNetwork
apiVersion: utxo.com.br/v1alpha1
metadata:
  name: regtest
spec:
  replicas: 3

  template:
    spec:
      monerod:
        args:
          - --regtest
          - --fixed-difficulty=1

Under the hood, the following tree of objects gets formed:

$ kubectl tree moneronetwork.utxo.com.br regtest

  NAME                                          
  MoneroNetwork/regtest                         
  ├─MoneroNodeSet/regtest-0                     
  │ ├─Service/regtest-0                         
  │ │ └─EndpointSlice/regtest-0-plf9m           
  │ └─StatefulSet/regtest-0                     
  │   ├─ControllerRevision/regtest-0-6dc6799f4b 
  │   └─Pod/regtest-0-0                         
  ├─MoneroNodeSet/regtest-1                     
  │ ├─Service/regtest-1                         
  │ │ └─EndpointSlice/regtest-1-7sd9z           
  │ └─StatefulSet/regtest-1                     
  │   ├─ControllerRevision/regtest-1-5b5c6b7b8d 
  │   └─Pod/regtest-1-0                         
  └─MoneroNodeSet/regtest-2                     
    ├─Service/regtest-2                         
    │ └─EndpointSlice/regtest-2-rhmd9           
    └─StatefulSet/regtest-2                     
      ├─ControllerRevision/regtest-2-7fdbcdb57b 
      └─Pod/regtest-2-0                         

with each node's flags properly set so that the nodes are interconnected:

$ kubectl get pods -ojsonpath={.items[*].spec.containers[*].command} | jq '.'
[
  "monerod",
  "--add-exclusive-node=regtest-1",
  "--add-exclusive-node=regtest-2",
  "--fixed-difficulty=1",
...
]
[
  "monerod",
  "--add-exclusive-node=regtest-0",
  "--add-exclusive-node=regtest-2",
  "--fixed-difficulty=1",
...
]
[
  "monerod",
  "--add-exclusive-node=regtest-0",
  "--add-exclusive-node=regtest-1",
  "--fixed-difficulty=1",
...
]

Install

  1. install
# submit the customresourcedefinition objects, deployment, role-based access
# control configs, etc.
#
kubectl apply -f ./config/release.yaml

License

See ./LICENSE


monero-operator's Issues

tor: circuit rotation?

rotating circuits is currently very easy for hidden service ingress (kill the proxy pod and let the replicaset reconciler create another one), but there might be an opportunity to make that configurable (if desirable), like:

tor:
  enabled: true
  rotateCircuits: 24h

which could then be either a pod self-destruction mechanism (easy peasy), or a sidecar that makes a request to the control port over loopback to order the establishment of new circuits - not too hard, as it's just a matter of leveraging Tor's text-based control protocol (a SIGNAL NEWNYM). could perhaps be a separate command for tornetes?
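
as a sketch of the sidecar flavor - assuming the tor image ships a shell and nc, and that the control port is enabled on 127.0.0.1:9051 without authentication (all of which are assumptions, for illustration only):

# hypothetical sidecar that orders fresh circuits once a day through the
# control port's text protocol (SIGNAL NEWNYM); image, port, and auth setup
# are assumptions
- name: circuit-rotator
  image: utxobr/tor
  command:
    - /bin/sh
    - -c
    - |
      while true; do
        sleep 86400
        printf 'AUTHENTICATE ""\r\nSIGNAL NEWNYM\r\nQUIT\r\n' | nc 127.0.0.1 9051
      done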

tor support

overview

Support for Tor is provided on two fronts:

  • facilitating the creation of credentials for hidden services through
    utxo.com.br/tor-labelled secrets
  • wiring the pods that run monerod instances with a Tor sidecar that acts as
    ingress and egress for Tor traffic, as well as applying the proper args for
    monerod.

Through the combination of both, one gets the ability to have a full monero
node, on any VPS or cloud provider, serving over both clearnet and Tor with no
more than 5 lines of yaml:

kind: MoneroNodeSet
apiVersion: utxo.com.br/v1alpha1
metadata: {name: "my-nodes"}
spec:
  tor: {enabled: true}

yes, powerful.

secret reconciler

This reconciler, based on labels, takes action on those Secrets that should
be populated with Tor credentials.

All you need to do is create a Secret with the label utxo.com.br/tor: v3
for a v3 hidden service (v2 is being deprecated anyway, so why bother).

(ps.: if the secret is already populated, the reconciler WILL NOT try to
populate it again)

For instance, we can create a Secret named tor:

apiVersion: v1
kind: Secret
metadata:
  name: tor
  labels:
    utxo.com.br/tor: v3

which after reconciliation will see its data filled with the contents of the
files you'd expect to find under HiddenServiceDir (base64-encoded, as usual
for Secret data - the hostname is shown decoded below for readability):

apiVersion: v1
kind: Secret
metadata:
  name: tor
  labels:
    utxo.com.br/tor: v3
data:
  hs_ed25519_secret_key: ...
  hs_ed25519_public_key: ...
  hostname: blasblashskasjjha.onion

(you can see whether things went well or badly through the events emitted by
the reconciler)

With those filled, we're then able to make use of them in the form of a volume
mount in a Tor sidecar which then directs traffic to the main container's port
through loopback - after all, they're in the same network namespace.

A full example of a highly-available hidden service:

---
#
# create an empty but annotated secret that will get populated with the hidden
# service credentials.
#
apiVersion: v1
kind: Secret
metadata:
  name: tor
  labels:
    utxo.com.br/tor: "v3"

---
#
# fill a ConfigMap with the `torrc` to be loaded by the tor sidecar.
#
apiVersion: v1
kind: ConfigMap
metadata:
  name: tor
data:
  torrc: |-
    HiddenServiceDir /tor-creds
    HiddenServicePort 80 127.0.0.1:80
    HiddenServiceVersion 3

---
#
# the deployment of our application with the application container, as well as
# a sidecar that carries the tor proxy, exposing our app to the tor network.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  labels: {app: foo}
spec:
  selector:
    matchLabels: {app: foo}
  template:
    metadata:
      labels: {app: foo}
    spec:
      volumes:
        - name: tor-creds
          secret: {secretName: tor}
        - name: torrc
          configMap: {name: tor}

      containers:
        - image: utxobr/example
          name: my-main-container
          env:
            - name: ONION_ADDRESS
              valueFrom:
                secretKeyRef:
                  name: tor
                  key: hostname

        - image: utxobr/tor
          name: tor-sidecar
          volumeMounts:
            - name: tor-creds
              mountPath: /tor-creds
            - name: torrc
              mountPath: /torrc

ps.: notice that there's no need for a Service - that's because we don't need
an external IP or any form of public port; this is a hidden service :)

an interesting side note here is that not only are we able to expose our
service on the Tor network, but we also have access to it via socks5 by
making requests to the sidecar at 127.0.0.1:9050 (again, same network
namespace!)

ps.: note the use of the ONION_ADDRESS environment variable - that's in
order to force redeployments to occur whenever there's a change to the secret - see https://ops.tips/notes/kuberntes-secrets/

tor-enabled monero nodes

As MoneroNodeSets create plain core Kubernetes resources in order to drive
the execution of monerod, we can do the same for enabling Tor support.

Just like with non-Tor nodes, we still want to be able to create nodes with
nothing more than a request for monero nodes:

kind: MoneroNodeSet
apiVersion: utxo.com.br/v1alpha1
metadata: {name: "my-nodes"}
spec: 
  replicas: 1

As Tor support should be just as simple as clearnet, making it Tor-enabled
takes a single line:

 kind: MoneroNodeSet
 apiVersion: utxo.com.br/v1alpha1
 metadata: {name: "my-nodes"}
 spec: 
   replicas: 1
+  tor: {enabled: true}

Under the hood, all we do then is create one extra primitive - a
utxo.com.br/tor-labelled Secret - which we mount into a Tor sidecar
container. Using those credentials, the sidecar proxies traffic from the Tor
network into monerod via loopback, and also serves as a socks5 proxy for
outgoing connections (through loopback as well).


        StatefulSet

                ControllerRevision

                        Pod
                                container monerod
                                        -> mounts volume for data
                                        -> points args properly at sidecar

                                container torsidecar
                                        -> mounts volume for torrc configmap 
                                        -> mounts volume for hidden svc secrets
                                                -> proxies tor->monerod
                                                -> proxies monerod->tor
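
for illustration, the "points args properly at sidecar" step boils down to monerod's standard Tor options, along these lines - the exact set the operator passes is up to its implementation, and the onion hostname placeholder below is illustrative:

# illustrative tor-related flags for the monerod container; the onion
# hostname would come from the secret's `hostname` key
args:
  - --tx-proxy=tor,127.0.0.1:9050,16                            # relay txs over tor via the sidecar's socks5
  - --anonymous-inbound=<onion-hostname>:18083,127.0.0.1:18083  # accept inbound from the hidden service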

moneronodeset: hugepages

running a setup where hugepages is enabled on the host but we don't provision them to the pod, monerod fails with a SIGBUS when starting up.

it'd be nice to get the coredump and do some investigation to see if there's a nice way of fixing it, but in the meantime,

      resources:
        requests:
          memory: 1Gi
        limits:
          memory: 1Gi
          hugepages-2Mi: 1Gi

works great
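
for context, here's where that block sits in a plain pod spec - nothing operator-specific; the image name is illustrative:

# plain Kubernetes placement of the workaround - illustrative fragment
apiVersion: v1
kind: Pod
metadata:
  name: full-node-0
spec:
  containers:
    - name: monerod
      image: utxobr/monerod    # illustrative image name
      resources:
        requests:
          memory: 1Gi
        limits:
          memory: 1Gi
          hugepages-2Mi: 1Gi   # hugepages requests must equal limits (requests default to the limit)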

cmd: torgen

the secrets reconciler does a very good job of filling the secret, but it might be useful for folks to be able to do a quick :r!monero-operator generate-tor-secret in vim 😅

o11y


so,

  1. wire prometheus to the endpoints provided by the internal headless service that's used for the sts (see the sketch below)
  2. wire grafana to prometheus
  3. write the monerod sidecar
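
as a sketch of step 1 - assuming prometheus-operator's ServiceMonitor CRD is available and that the yet-to-be-written monerod sidecar serves metrics on a port named "metrics" (both assumptions):

# hypothetical ServiceMonitor targeting the headless service behind the sts;
# the label selector and port name are assumptions
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: full-node
spec:
  selector:
    matchLabels:
      app: full-node      # assumed label on the headless Service
  endpoints:
    - port: metrics       # assumed port name on the monerod sidecar
      interval: 30s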

cmd: dry-run

it'd be nice for those who don't want to actually run the reconciler to be able to just generate, from the cli, all the objects they care about (which, after all, are fine to run on any k8s cluster without the need for any extra crd)
