Nimbus

Self-hosted services in the Cloud.

Introduction

Nimbus centralises the infrastructure (e.g. Terraform deployments, Docker containers) that deploys self-hosted services on cloud platforms into one repository.

Features

  • Infrastructure as Code (IaC): expressing infrastructure as code makes it dynamic & malleable to change. Dependencies between multiple cloud providers can be expressed explicitly in code, and checking IaC into Git provides checkpoints for rollback if something goes wrong.
  • Multi-Cloud: consolidates deployments on multiple cloud platforms (GCP, Cloudflare & Backblaze) in one place.

Architecture

flowchart LR
    tls[Let's Encrypt TLS]
    b2[Backblaze B2 Object Storage]

    subgraph cf[Cloudflare]
        direction TB
        DNS
        CDN
    end

    cf[Cloudflare] <--> gcp

    subgraph gcp[Google Cloud Platform]
        direction LR
        subgraph gce[Compute Engine]
            dev-env[WARP Dev Environment]
        end
    end

Services

User-facing services hosted on Nimbus:

  • WARP: a portable development environment based on a cloud VM

License

MIT.

nimbus's Issues

Migrate: Linode / Akamai deployments to GCP

Motivation

Linode / Akamai is hiking their prices by 20% on 1st Apr.

  • Linode is no longer competitively priced against the Big 3 cloud providers (GCP, AWS, Azure):
    • Last Linode bill (SGD w/ GST): $51.81 (LKE, Dedicated 2 vCPU, 4 GB RAM, 1000 GB egress, Load Balancer); with the 20% hike, estimated: $62.17.
    • GCP Pricing Calculator estimate (SGD w/ GST): $87.19 (GKE, n1-standard-2 2 vCPU, 7.5 GB RAM, 100 GB egress).
      • The extra RAM will help future-proof workloads: metrics on Grafana currently put our memory utilisation at 91.5%.
  • Linode's (relatively) bare-bones cloud platform introduces operational hurdles vs the Big 3 providers, which have a better ecosystem of managed services & tooling:
    • Managed services: e.g. cloud storage on Linode is subscription-based, so an external service, Backblaze, was brought in for pay-as-you-go cloud storage. This would not have been a problem with the Big 3 cloud providers, as they all have integrated cloud storage: AWS S3, GCP GCS & Azure Storage.
    • Tooling: e.g. Infracost currently supports AWS, Azure & GCP, but not Linode.
  • Spreading workloads across multiple clouds incurs unnecessary egress & latency costs when they communicate.
  • It's hard to support the complexity of a multi-cloud architecture as the sole developer.

Proposal

Reverse course on #51.

Migrate Linode / Akamai deployments to GCP to integrate Nimbus deployments behind a single cloud:

  • LKE to GKE
  • Load Balancer to MetalLB.

Checklist

  • Back up K8s Volumes into GCS (see the sketch after this list):
    • library-calibre-web-settings
    • media-jellyfin-config
  • Port LKE Terraform module to GKE
  • Deploy MetalLB on GKE (dropped: MetalLB is not supported on GKE).
  • Restore K8s Volumes on GKE
  • Port & Deploy K8s manifests on GKE
  • Update DNS
  • Update CI workflows.
  • Update README (diagram etc.)
  • Functional Testing on GKE
  • Teardown Linode deployments.
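
As a minimal sketch of the volume backup step, a one-off K8s Job could mount a volume and rsync its contents into a GCS bucket. The bucket name here is hypothetical, and the Job is assumed to have GCS write access (e.g. via Workload Identity):

apiVersion: batch/v1
kind: Job
metadata:
  name: backup-library-calibre-web-settings
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: google/cloud-sdk:slim
          # copy the volume's contents into a GCS bucket for later restore on GKE
          command:
            - gsutil
            - -m
            - rsync
            - -r
            - /data
            - gs://nimbus-volume-backups/library-calibre-web-settings
          volumeMounts:
            - name: data
              mountPath: /data
              readOnly: true
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: library-calibre-web-settings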

k8s: Deploy TiddlyWiki on Linode LKE

Motivation

Deploy TiddlyWiki to host personal notetaking.

Requirements

Deploy TiddlyWiki on NodeJS:

  • with plugins
    • code mirror plugin
    • markdown plugin
    • formula plugin
    • katex plugin
    • relink plugin
    • tw5-extendedit plugin
    • edit-autolist
    • notebook theme
    • TWCrosslinks
    • Context Plugin
  • Secure TiddlyWiki with OAuth Authentication
  • Make TiddlyWiki accessible from anywhere over TLS
  • Automated backups to archive TiddlyWiki to Wasabi cloud storage.

Proposal

Architecture

Tiddlywiki Deployment Architecture

OAuth Authentication

Authentication flow:

  • user makes a request to oauth2-proxy.
  • oauth2-proxy performs an OAuth authorization grant with Keycloak.
  • user is redirected to the Keycloak sign-in page to sign in.
  • oauth2-proxy sets a session cookie.
  • oauth2-proxy starts proxying requests to TiddlyWiki.
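
A minimal sketch of the oauth2-proxy sidecar wiring; the image name, issuer URL & ports are assumptions, not taken from the repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiddlywiki
spec:
  selector:
    matchLabels: { app: tiddlywiki }
  template:
    metadata:
      labels: { app: tiddlywiki }
    spec:
      containers:
        - name: tiddlywiki
          image: ghcr.io/mrzzy/tiddlywiki   # hypothetical custom image with plugins baked in
          ports:
            - containerPort: 8080
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy
          args:
            - --provider=keycloak-oidc
            - --oidc-issuer-url=https://keycloak.example.com/realms/nimbus   # hypothetical realm
            - --upstream=http://localhost:8080   # TiddlyWiki in the same pod
            - --http-address=0.0.0.0:4180
            # --client-id / --client-secret / --cookie-secret omitted for brevity
          ports:
            - containerPort: 4180

Only the oauth2-proxy port (4180) would be exposed via the Ingress, forcing every request through the authentication flow above.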

Backup Workflow

Backup flow:

  • TiddlyWiki handles user requests and writes tiddlers to disk.
  • An SFTP server exposes the tiddlers via SFTP.
  • A backup pipeline (Argo Workflow) downloads the tiddlers & uploads them to Wasabi S3.
    • Use rclone to copy tiddler files from SFTP to Wasabi S3.
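
A sketch of the backup pipeline as a scheduled Argo CronWorkflow; the remote names & paths are hypothetical, and an rclone.conf defining the sftp & Wasabi S3 remotes is assumed to be mounted into the container:

apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: tiddlywiki-backup
spec:
  schedule: "0 2 * * *"   # nightly backup
  workflowSpec:
    entrypoint: backup
    templates:
      - name: backup
        container:
          image: rclone/rclone
          # copy tiddlers from the SFTP server to the Wasabi S3 bucket
          args: ["copy", "tiddlywiki-sftp:/tiddlers", "wasabi:tiddlywiki-backup"]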

Todo

  • Compile TiddlyWiki docker image with TiddlyWiki & plugins properly configured
  • Write k8s manifests (ie Deployment) for deploying TiddlyWiki on Linode K8s.
  • Deploy Keycloak as OAuth2 provider via keycloak-operator
  • Deploy oauth2-proxy sidecar to add OAuth2 auth to TiddlyWiki
  • Create wiki.mrzzy.co DNS CNAME and point it at the Ingress IP
  • Expose TiddlyWiki via Ingress with LetsEncrypt TLS.
  • Deploy Argo Workflow Engine
  • Expose tiddler files written by the TiddlyWiki instance via SFTP.
  • Write Argo Workflow to back up TiddlyWiki to Wasabi S3

Migrate from GCP to Linode Cloud

Motivation

Linode provides cheaper infrastructure compared to Google Cloud. Since this is a hobbyist side-project:

  • we should optimize infrastructure costs where possible.
  • there is no need to pay extra for tougher SLAs.

While moving to Linode from GCP would mean that we trade off:

  • Per minute billing: Linode instances are billed by the hour.
  • Lots of managed services: while managed services allow us to delegate some production-hardening complexity to Google Cloud when building applications, this does not apply to our hobbyist use case.

Proposal

Migrate Cloud Infrastructure from GCP to Linode Cloud:

  • migrate Cloud DNS & TLS ACME to Linode Managed DNS

k8s: Clean up K8s Kustomize Deployments

Problem

  • Version matching for patch targets is brittle, as it will break when the apiVersion of the targeted resources changes.
  • apiVersion and kind: Kustomization are not present in the Kustomizations. Not pinning the Kustomization resource to a version might cause it to break unexpectedly in the future, if the Kustomization version we depend on is dropped.

Proposal

  • Remove version matching in patches in Kustomizations.
  • Add apiVersion and kind: Kustomization to all Kustomizations under k8s/kustomize.
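
The pinned header to be added to each kustomization.yaml under k8s/kustomize would look like this (the resource entry is only a placeholder):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # existing entries unchanged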

terraform: Setup Bastion VPN to provide access to internal LKE services

Motivation

  • WireGuard on K8s is too much of a hassle:
    • it does not really work; K8s services can't be accessed through it for some reason.
  • kubectl port-forward is unstable and unsuitable for accessing internal services.
  • exposing internal services over the public internet is insecure.

Proposal

Bastion Host Diagram

  • create a tiny instance hosting a WireGuard server as a bastion, with IP forwarding capabilities.
  • expose services via NodePorts, using a firewall to:
    • ensure that unwanted connections from the internet are blocked.
    • allow connections to services from the bastion server.
  • deploy an internal ingress-nginx to route internal HTTP services, accessible only from the bastion server.
  • create DNS routes so that requests to service IPs are tunneled via the bastion server.

Steps

  • Write a Terraform deployment to create a tiny instance for WireGuard.
  • Block any external internet traffic not meant for public access (ie all except shadowsocks) to LKE worker nodes with a Linode Firewall.
  • Deploy an internal ingress-nginx with a custom ingress class and a NodePort service.
  • Expose HTTP services with Ingresses targeting the internal ingress-nginx's ingress class.
  • Create DNS routes for HTTP services.
  • Add IPTables rules routing traffic from ports 80/443 to the internal ingress-nginx's NodePorts, removing the need to suffix every URL with the NodePort.
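
A sketch of the port-forwarding rules as cloud-init config for the bastion instance; the worker node IP (192.0.2.10) and NodePorts (30080/30443) are hypothetical placeholders:

#cloud-config
runcmd:
  # let the bastion forward packets
  - sysctl -w net.ipv4.ip_forward=1
  # rewrite HTTP/HTTPS arriving at the bastion to the internal ingress-nginx NodePorts
  - iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.0.2.10:30080
  - iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.0.2.10:30443
  # masquerade so return traffic flows back through the bastion
  - iptables -t nat -A POSTROUTING -j MASQUERADE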

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

dockerfile
docker/naiveproxy/Dockerfile
  • caddy 2.8.4
github-actions
.github/workflows/apply-terraform.yaml
  • actions/checkout v3
  • hashicorp/setup-terraform v2
  • ubuntu 20.04
.github/workflows/cleanup-terraform.yaml
  • actions/checkout v3
  • hashicorp/setup-terraform v2
  • actions/checkout v3
  • google-github-actions/auth v2
  • google-github-actions/setup-gcloud v2
  • ubuntu 20.04
  • ubuntu 20.04
.github/workflows/docker.yaml
  • actions/checkout v3
  • docker/login-action v2.2.0
  • docker/metadata-action v4.6.0
  • docker/build-push-action v4.2.1
  • ubuntu 22.04
.github/workflows/lint-secrets.yaml
  • actions/checkout v3
  • DariuszPorowski/github-action-gitleaks v2
  • ubuntu 20.04
.github/workflows/lint-terraform.yaml
  • actions/checkout v3
  • hashicorp/setup-terraform v2
  • actions/checkout v3
  • hashicorp/setup-terraform v2
  • ubuntu 20.04
  • ubuntu 20.04
pip_requirements
requirements.txt
  • pre-commit ==3.7.1
regex
.github/workflows/apply-terraform.yaml
  • hashicorp/terraform 1.8.5
.github/workflows/cleanup-terraform.yaml
  • hashicorp/terraform 1.8.5
.github/workflows/lint-terraform.yaml
  • hashicorp/terraform 1.8.5
  • hashicorp/terraform 1.8.5
terraform
terraform/aws.tf
terraform/azure.tf
terraform/cloudflare.tf
terraform/gcp.tf
  • github.com/mrzzy/warp 1bd4e73719565dfbcbfc7cccd1603e6bc9304edf
terraform/main.tf
  • acme 2.23.2
  • aws 4.67.0
  • azuread 2.52.0
  • azurerm 3.109.0
  • b2 0.8.12
  • cloudflare 4.35.0
  • google 4.85.0
  • hashicorp/terraform >=1.3.0
terraform/modules/cloudflare/dns/main.tf
  • cloudflare 4.35.0
terraform/modules/gcp/iam/main.tf
  • google >= 4.22.0
terraform/modules/gcp/vpc/main.tf
  • google >= 4.22.0
terraform/modules/tls_acme/main.tf
  • acme < 2.23.3
  • tls < 4.0.6

  • Check this box to trigger a request for Renovate to run again on this repository

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: renovate.json
Error type: The renovate configuration file contains some invalid settings
Message: Regex Managers must contain currentValueTemplate configuration or regex group named currentValue
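
The fix is to give each regex manager a named currentValue capture group (or a currentValueTemplate). A hedged sketch of what the renovate.json entry might look like, assuming it tracks the terraform_version pins in the workflows (the regex is illustrative, not the repo's actual one):

{
  "regexManagers": [
    {
      "fileMatch": ["^\\.github/workflows/.*\\.yaml$"],
      "matchStrings": ["terraform_version: \"?(?<currentValue>[0-9.]+)\"?"],
      "depNameTemplate": "hashicorp/terraform",
      "datasourceTemplate": "github-releases"
    }
  ]
}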

k8s: Deploy MariaDB RDBMS using mysql helm chart

Motivation

  • A MySQL-compatible RDBMS (MariaDB) to use for learning SQL.

Requirements

  • DB credentials injected via Sealed Secret.
  • Persistent Volume storage.

Out of Scope:

  • clustering / replication.
  • extended monitoring with mysql-exporter.

Proposal

Deploy MariaDB with Bitnami Helm Chart

Checklist

  • Create Sealed Secret with keys: mariadb-root-password, mariadb-replication-password, mariadb-password
  • Deploy MariaDB with Bitnami Helm Chart
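
A sketch of the Helm values wiring the chart to the unsealed Secret; the Secret name is hypothetical, while the key names follow the checklist above (the Bitnami chart's auth.existingSecret expects those keys):

# values.yaml for the Bitnami MariaDB chart
auth:
  existingSecret: mariadb-credentials   # hypothetical Secret produced by the Sealed Secret
primary:
  persistence:
    enabled: true   # Persistent Volume storage requirement
    size: 8Gi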

Setup / Teardown Infra for WARP Devbox

Problem

Manually setting up and tearing down the WARP Box is non-reproducible & time-consuming.
Just leaving the box running 24/7 will incur significant cloud provider billing.

Proposal

Write a Terraform module to automate a reproducible deployment of the WARP Box on GCP.

Requirements

Allow HTTP Access to WARP VM by IP Range

Motivation

See mrzzy/warp#21

Proposal

Add supporting infrastructure for WARP VM's HTTP terminal:

  • Add Deploy Terraform workflow inputs:
    • Add a checkbox to enable the HTTP terminal for the WARP VM.
    • Add a text field allowing the user to configure the CIDR range allowed to send traffic to the WARP VM.
  • Update the Terraform deployment to support the workflow inputs:
    • Allow HTTP traffic to the WARP VM if the checkbox is enabled.
    • Set the CIDR range based on the user's setting in the text field.
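
A sketch of the two dispatch inputs (names are hypothetical); each would be passed down to Terraform, e.g. as TF_VAR_* environment variables:

on:
  workflow_dispatch:
    inputs:
      http_terminal:
        description: Allow HTTP access to the WARP VM's web terminal
        type: boolean
        default: false
      http_cidr:
        description: CIDR range allowed to reach the HTTP terminal
        type: string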

Refactor:

repo: Use secrets.baseline to manually configure secrets detection

Motivation

Context

Currently we use detect-secrets & pre-commit to provide a pre-commit secrets check:

  • Ensuring that we do not commit & leak secrets into this public repo.

detect-secrets falsely flags SealedSecrets as secrets:

  • SealedSecrets are encrypted and should be safe to store in a public repo.

As a workaround, detect-secrets is currently configured by pre-commit to ignore all K8s manifests with the -sealed.yaml suffix:

- '--exclude-files'
# prevent false positive on sealed secrets
- '.*sealed.yaml'

Problem

The workaround is brittle (it relies on a filename suffix). Additionally, ignoring secrets detection should be done mindfully and require an explicit step.

Requirements

  • Don't rely on brittle filename suffixes to disable secrets detection.
  • Ignoring secrets detection should be done mindfully and require an explicit step.

Proposal

Use secrets.baseline to manually configure secrets detection:

  • Files to be excluded are explicitly added to the secrets baseline.
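
A sketch of the pre-commit wiring, assuming the baseline file is generated once with detect-secrets scan and committed as .secrets.baseline (the rev pin is illustrative):

repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        # audit findings against the committed baseline instead of excluding by filename
        args: ['--baseline', '.secrets.baseline']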

Migrate kube-prometheus deploy from kustomize to be based on jsonnet

Motivation

Currently, we deploy kube-prometheus-stack via a kustomization that basically boils down to a list of hard-coded resource URLs.

Switch to a jsonnet-based deployment:

  • removes the need to deal with both kustomize & jsonnet for the same kube-prometheus-stack deployment.
  • allows us to persistently change the Grafana admin credentials from the default.
  • facilitates adding new alerts and dashboards, especially since the tooling is also in jsonnet (ie grafonnet for Grafana dashboards).

Things To Do

Self hosted Streaming Service

Motivation

Improve the quality of video content consumed.

  • expand the range of content consumed beyond just social media (eg. Youtube, Reddit videos).
  • make progress watching critically acclaimed films by making them easily accessible.

Proposal

UX

User flow of the streaming service:

  1. User decides to watch a certain Film / TV show.
  2. User navigates to media.mrzzy.co/files to queue the Film / TV show for download.
  3. User opens a Jellyfin client (eg. the web client at media.mrzzy.co) to stream the Film / TV show.

At steps 2. & 3. there should be some SSO login challenge to reject unauthorized users.

System Design

Kubernetes is introduced into the system design to mitigate the complexity of orchestrating multiple services.

flowchart LR
  client([Jellyfin Client]) <--> ingress-nginx
  subgraph K8s
       direction LR
       ingress-nginx <-->|auth.mrzzy.co| oauth2-proxy
       ingress-nginx <-->|media.mrzzy.co| jellyfin[Jellyfin server]
       ingress-nginx <-->|media.mrzzy.co/files| flood[FloodJS] <--> rtorrent
  end
  rtorrent -->|mount| bucket
  bucket[(Storage Bucket)] -->|mount| jellyfin

Deploy a self-hosted Streaming Service:

  • 1. SSO Login: ingress-nginx checks for credentials & redirects the user to oauth2-proxy to authenticate.
  • 2. Sourcing Media: rtorrent, with FloodJS as its UI, will procure the media files we need for streaming & store them inside a storage bucket.
  • 3. Storing Media: a storage bucket will persist the media files.
  • 4. Serving Media: Jellyfin transcodes & streams the media files stored in the storage bucket to the client.

Future Work

Out of scope future work:

  • Integrating more identity providers (eg. Google, GitHub) with dex.
  • scale to zero with Knative.
  • monitoring with Prometheus & Grafana.

Proxy WARP VM's Web Terminal via Nginx

Motivation

On a corporate network, HTTPS connections made to WARP VM's Web Terminal are immediately met with a TCP reset.

  • Presumably this is due to the firewall denying any TLS handshakes to servers offering TLS certificates with only Domain Validation.

Approaches

Considered approaches:

  • Revert to HTTP: revert to using insecure HTTP to avoid blocking by TLS certificate validation. This didn't work: without the obfuscation provided by HTTPS, the firewall is able to perform Deep Packet Inspection (DPI) and block the websocket connections TTYD requires to operate.
  • Replace TTYD: deploy Apache Guacamole to replace the TTYD web terminal embedded in WARP VM. Guacamole features a websocket-free web terminal implementation, avoiding the firewall blocking that TTYD over HTTP suffers from, as detailed earlier. However, Guacamole is a distributed system, which introduces too much complexity just to avert blocking on a specific corporate network.
  • Employ Cloudflare to proxy requests to WARP VM:
    • Cloudflare's Universal SSL certificates turned out not to be trusted by the corporate network, despite the initial assumption that blocking widely-used Cloudflare would incur too much collateral damage.
    • Cloudflare is free, alleviating any hosting cost.

Proposal

Nginx Reverse Proxy: deploy Nginx on Google App Engine as a WARP VM proxy. This option requires significant engineering effort to implement.

  • Google App Engine (GAE) provides a wildcard TLS certificate for *.appspot.com to services hosted on it. This certificate is trusted by the corporate network. Maintaining HTTPS prevents the DPI that the firewall relies on to block the websockets TTYD needs to work.
  • Nginx performs the reverse proxy function by proxying requests to WARP VM.

Implementation

  • Craft nginx.conf config file to reverse proxy requests to TTYD web terminal
  • Deploy Nginx on GAE with nginx.conf

Fix CD not properly applying ArgoCD CRDs

Motivation

In this CD step, we can see that kubectl only acknowledges applying one Argo CRD:

appproject.argoproj.io/nimbus unchanged

Problem

Since the directory structure of k8s/argocd changed in 306343b,
kubectl has only applied the manifests in the top-level directory, without recursing into sub-directories.

Proposal

Add the --recursive flag to kubectl apply in the CD pipeline, so that manifests in sub-directories are applied as well.
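
A sketch of the fixed CD step (step name & manifest path are assumptions):

- name: Apply ArgoCD manifests
  # --recursive walks sub-directories instead of stopping at the top level
  run: kubectl apply --recursive --filename k8s/argocd/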

Deploy Web Proxy on Google App Engine

Motivation

A self-hosted web proxy will be useful to bypass overly restrictive network filtering (both stackoverflow.com & github.com are blocked).

Problem

We can't just deploy a Web Proxy on K8s and call it a day:

  • Internal browsers do not trust TLS certificates issued by LetsEncrypt, or Domain Validation certs issued by Sectigo.
  • Traffic over HTTP is heavily filtered (eg. websockets are blocked over HTTP, but not HTTPS).

Proposal

Piggyback on the *.appspot.com wildcard TLS certificate available for apps deployed on Google App Engine.

Deploy self-hosted Web Proxy on Google App Engine:

  • Build a Swiperproxy container configured for running on Google App Engine:
    • listen for HTTP requests on port 8080.
    • output access / error logs to /var/log/app_engine/app.log, where they are picked up by Google App Engine.
    • inject robots.txt to discourage web crawlers.
  • Create a Terraform module to deploy the Swiperproxy container on Google App Engine.
  • Add a CNAME DNS alias proxy.mrzzy.co pointing at the Google App Engine provided URL: https://SERVICE_ID-dot-PROJECT_ID.REGION_ID.r.appspot.com
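
A sketch of the App Engine config for such a container; the service name is hypothetical, and the flexible environment with a custom runtime is assumed since the proxy ships as its own container:

# app.yaml
runtime: custom   # build & run the container from the bundled Dockerfile
env: flex
service: proxy    # yields https://proxy-dot-PROJECT_ID.REGION_ID.r.appspot.com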

Expose WARP VM with DNS & TLS

Motivation

Post mrzzy/warp#17, WARP VM is able to expose a web-accessible terminal via ttyd.

However, it's currently locked down by the VPC firewall due to security concerns (unencrypted HTTP):

  • The only way to access it as of the time of writing is to use an SSH tunnel via remote port forwarding.

Support the change of approach on the WARP VM side.

Proposal

Allow secure, web-browser-only HTTPS access to WARP VM:

  • Create a vm.warp.mrzzy.co DNS route via GCP Cloud DNS with WARP VM's IP address.
  • Use the ACME Terraform Provider to obtain a Let's Encrypt TLS certificate for WARP VM:
    • use the dns-01 ACME challenge type with GCP Cloud DNS.
  • Write the TLS certificate to the path WARP VM expects (see: mrzzy/warp#19).
  • Expose the HTTPS port for secure TTYD web terminal access.

Allow WARP VM GCE machine type to be configured on Deploy

Motivation

Different development tasks require varying amounts of system resources; there is no one-size-fits-all machine type.

Proposal

Allow the WARP VM GCE machine type to be configured on Deploy:

  • Add a terraform variable to deploy with the specified machine type.
  • Add a machine type dropdown to the Deploy CI dispatch inputs.
  • Pass the machine type to the terraform variable.
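
A sketch of the dropdown & the hand-off to Terraform; the input name, machine type options & variable name are assumptions:

on:
  workflow_dispatch:
    inputs:
      machine_type:
        description: GCE machine type to deploy the WARP VM with
        type: choice
        default: e2-medium
        options:
          - e2-small
          - e2-medium
          - e2-standard-4
jobs:
  deploy:
    runs-on: ubuntu-22.04
    env:
      # terraform reads TF_VAR_-prefixed env vars as input variables
      TF_VAR_machine_type: ${{ github.event.inputs.machine_type }}
    steps:
      - run: terraform apply -auto-approve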

Clean up Old Infra in Nimbus

Problem

Objective: Clean up old infra in Nimbus to pave the way for WARP's infra.

Requirements

  • Delete k8s, ArgoCD, jsonnet & kustomize infra.
  • Delete Linode related infra.

k8s: Refactor secrets delivery for deployments

Motivation

Currently, we embed sealed secrets in the nimbus repository using the following workflow:

  1. Create the actual secret on the filesystem.
  2. Add a target to the project makefile to build the sealed secret using kubeseal, and commit that.
  3. Use a separate CI to deploy the sealed secret with kustomize.
  4. The sealed-secrets controller unseals and creates the Secret resources that deployments use.

Pain points with this approach:

  1. Steps 1-2 are not really compatible with Helm charts; it is painful to inject generated sealed secrets into Helm. Blocks #16.
    • In theory we could template sealed secrets into the extraDeploy hook many Helm charts have, but that needs a lot more tooling.
  2. Secrets are stored on file, not committed. This means distributed development on this repository is not currently possible.

Requirements

Must Haves:

  • Solve pain point 1. First class workflows with Helm charts.
  • Encryption at rest for secrets.

Good to Haves:

  • Single source of truth: Make secrets rotation easier, especially with scale.
  • Automated secret rotation.

Proposal

Workaround (PR #18)

Sealed secrets without generation from a single source of truth:

  • Temporary workaround that resolves the issue and unblocks #16.
  • Tradeoff: sacrifices the single source of truth; secrets must be duplicated across deployments.

Long Term Solution

GitHub Secrets + Sealed Secrets + CD pipeline:

  • GitHub Secrets acts as the single source of truth.
  • The CD pipeline templates secrets and encrypts them with kubeseal (encryption at rest).
  • Sealed secrets are unsealed by the sealed-secrets controller to produce the Secret.
  • Point Helm charts at the unsealed Secret.
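
A sketch of the CD step sealing a secret templated from GitHub Secrets; the secret & file names are hypothetical, and kubeseal is assumed to reach the sealed-secrets controller (or be given its cert via --cert):

- name: Seal secrets
  env:
    MARIADB_PASSWORD: ${{ secrets.MARIADB_PASSWORD }}
  run: |
    # template a Secret locally, then encrypt it into a SealedSecret manifest
    kubectl create secret generic mariadb-credentials \
      --from-literal=mariadb-password="$MARIADB_PASSWORD" \
      --dry-run=client -o yaml \
      | kubeseal -o yaml > k8s/mariadb-credentials-sealed.yaml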

Considered Solutions

Hashicorp Vault + Vault Agent sidecar:

  • Hashicorp Vault provides the single source of truth.
  • Vault Agent interfaces with Hashicorp Vault, abstracting away dealing with Vault directly.
  • Dynamic secrets automate secret rotation.
  • 👎 High learning curve.
  • 👎 Huge operational burden: need to deploy Vault and unseal vaults; automatic unseal is available but is also a burden to set up.

Run Rtorrent & Flood in Separate Containers

Motivation

At the time of writing, rtorrent & flood are deployed together in the same container. Flood is responsible for managing the rtorrent instance.

This deployment mode has the following issues:

  • Single Point of Failure: if flood crashes for whatever reason, it takes the rtorrent instance down with it, together with any downloads in progress (as in-progress downloads are not persisted). For long-running downloads, this makes for a bad UX, as downloads seemingly disappear when K8s spins up a new container.
  • Lack of Observability: only Flood's logs are output to stdout, while rtorrent's logs are written to a log file. Debugging rtorrent usually means having to kubectl exec into the container and inspect the log manually.

Proposal

Switch to deploying rtorrent & flood in separate containers. See the example in the GitHub Discussion.

Checklist

  • Refactor rtorrent.rc to work when running the standalone jesec/rtorrent container.
  • Expose rtorrent via the rtorrent.rc config & a K8s Service.
  • Refactor the rtorrent-flood Deployment into a pod with 2 containers, communicating over a socket on a shared volume (sketched below).
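
A sketch of the 2-container pod, sharing the rtorrent socket over an emptyDir volume; the socket path & flood flags are assumptions based on the jesec images:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rtorrent-flood
spec:
  selector:
    matchLabels: { app: rtorrent-flood }
  template:
    metadata:
      labels: { app: rtorrent-flood }
    spec:
      containers:
        - name: rtorrent
          image: jesec/rtorrent
          volumeMounts:
            - { name: run, mountPath: /run/rtorrent }
        - name: flood
          image: jesec/flood
          # connect to rtorrent over the shared socket instead of managing it in-process
          args: ["--rtsocket", "/run/rtorrent/rtorrent.sock"]
          ports:
            - containerPort: 3000
          volumeMounts:
            - { name: run, mountPath: /run/rtorrent }
      volumes:
        - name: run
          emptyDir: {}

With this split, flood crashing no longer takes down rtorrent's in-progress downloads, and each container's logs surface separately via kubectl logs.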

Switch Proxy-gae to Host based routing

Problem

Fix Accessing Webpages hosted on WARP Box with custom routes

  • This is useful for web development & accessing web-based UI for tools like Spark, Prefect, Airflow & Jupyter.
  • #149 attempts to make local development services accessible from WARP Box by allowing the user to specify custom path based routes.
  • However, most web-based UIs do not support being hosted behind reverse proxies and/or under a custom path route in their URLs (eg. the Spark UI cannot access its static assets; Jupyter Lab will redirect to a path outside of its path route & fail to load).

Proposal

Switch to host-based routing instead of the current path-based routing:

  • Rewrite Proxy GAE to route warp.mrzzy.co/spark as spark.warp.mrzzy.co instead.
  • Create DNS routes for the custom routes (as is done currently for WARP VM itself) & point them at Proxy GAE.

Checklist

  • Switch template.py & nginx.conf.jinja2 to host-based routing.
  • Update tests.
  • Update CI, including the description.

k8s: Deploy shadowsocks internet proxy for censorship circumvention

Requirements

Deploy an internet proxy for censorship circumvention:

  • able to consistently circumvent China's GFW to provide access to blocked services.
  • scales to 1-5 users.

Develop a monitoring solution to verify the internet proxy's circumvention ability:

  • we need to be notified to respond if blocking occurs.

Out of Scope

  • UDP proxying / TCP fast open.
  • sysctl optimizations.
  • High Availability.
  • Pluggable transport plugins (ie Cloak, v2ray).

Success Criteria

Test the deployment's circumvention ability by:

  • Deploying an instance with shadowsocks in China (Alibaba cloud).
  • Periodically accessing blocked websites (google.com, facebook.com) via shadowsocks.

Success: If blocking does not happen within 3 days.

Proposal

Deploy shadowsocks-rust on Linode LKE following this guide

Deploy test Instance in Alibaba Cloud in China:

  • Periodically accesses blocked websites (google.com, facebook.com) via shadowsocks.
  • Exposes blocking status via /metrics endpoint in Prometheus format.

Configure Prometheus to:

  • Scrape metrics exposed by test instance.
  • Alert if test instance has trouble accessing blocked website via shadowsocks.

Steps

  • write the shadowsocks K8s deployment using the shadowsocks-rust docker image, with active-probing hardening:
    • K8s Service & Deployment.
    • Network Policy to prevent shadowsocks pods from connecting to other K8s services (sketched below).
    • Pod Disruption Budget to ensure at least 1 shadowsocks container is running.
  • write censorship-exporter: collects metrics on blocking, with the ability to tunnel connections via socks5.
    • exports metrics: site blocked, type of blocking (IP blackhole, DNS spoofing, QoS filtering, TCP reset).
  • configure Prometheus to scrape metrics from censorship-exporter.
  • add alerting for when censorship-exporter is unable to request censored websites.
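
A sketch of the Network Policy item above, assuming pods labeled app: shadowsocks and typical private cluster CIDRs (adjust to the cluster's actual ranges):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: shadowsocks-egress
spec:
  podSelector:
    matchLabels: { app: shadowsocks }
  policyTypes: [Egress]
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:   # block cluster-internal ranges, allow the public internet
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16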
