parmaster / zoomrs

Save thousands of dollars on Zoom Cloud Recording storage! Download recordings automatically and store them locally. Provides a simple but effective web frontend to watch and share meeting recordings.

License: GNU General Public License v3.0

Go 83.04% Makefile 1.85% HTML 14.79% Dockerfile 0.31%
storage-service zoom zoom-api zoom-meetings zoom-recorder zoom-cloud-recording

zoomrs's People

Contributors: dependabot[bot], parmaster

zoomrs's Issues

CLI tool to explore zoom cloud recordings

The tool I've thought about from the very beginning; it should be able to:

  • list cloud Zoom meetings (Client.GetMeetings). Should everything be listed, not only what config.syncable allows?
  • delete past meetings - only after confirming with every config.commander.instances (as an option)
  • download on demand, displaying download status
  • explore the current repository - everything from the meetings table, with statuses (color coded?) and options
  • show cloud status (client.GetCloudStorageReport)

Explore CLI interface options
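The commands above could hang off a single subcommand dispatcher. A minimal sketch using the standard flag package; the command names and handlers are placeholders (only Client.GetMeetings and client.GetCloudStorageReport come from the notes above):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// dispatch routes a subcommand name to a handler. The handlers here are
// stubs; the real ones would call Client.GetMeetings,
// client.GetCloudStorageReport, etc.
func dispatch(cmd string) error {
	switch cmd {
	case "list":
		fmt.Println("listing cloud meetings...") // would call Client.GetMeetings
	case "status":
		fmt.Println("cloud storage report...") // would call client.GetCloudStorageReport
	case "repo":
		fmt.Println("exploring local repository...")
	default:
		return fmt.Errorf("unknown command %q", cmd)
	}
	return nil
}

func main() {
	cmd := flag.String("cmd", "list", "subcommand: list|status|repo")
	flag.Parse()
	if err := dispatch(*cmd); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
}
```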

CLI: sync cmd downloads too much

STR (steps to reproduce):

  • there are some records queued to download in the db - from 10 days ago, let's say
  • run cli sync for yesterday records: zoomrs-cli --dbg --cmd sync --days 1

ER (expected result):

  • yesterday's records are synced and downloaded; other queued records remain untouched

AR (actual result):

  • yesterday's records are synced, but the download job starts and downloads all the queued records - old and new

Remove empty directories after evicting records

After evicting every record (subdir) for a date (e.g. 2023-08-01), the empty date directory is left behind, so after 3 months there are 90+ empty directories in the repo.
Nothing critical, but it should be fixed.

/status should reflect the current state

Instead of a hardcoded "OK", it should return:

  • LOADING if records are being downloaded
  • FAILED when there are only queued and failed records, with none downloading
  • OK if there are only downloaded records in db
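The rules above as a small helper; the counts would come from the DB, and the function name is illustrative:

```go
package main

import "fmt"

// statusOf derives the /status value from record counts, following the
// rules listed above.
func statusOf(downloading, queued, failed int) string {
	switch {
	case downloading > 0:
		return "LOADING" // records are being downloaded
	case queued > 0 || failed > 0:
		return "FAILED" // only queued/failed records remain, none downloading
	default:
		return "OK" // only downloaded records in the db
	}
}

func main() {
	fmt.Println(statusOf(1, 0, 0), statusOf(0, 2, 1), statusOf(0, 0, 0))
}
```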

Old Records Eviction

Set some kind of limit (size and/or TTL) in the config:

  • for size - check sum(fileSize) of 'downloaded' records
  • for TTL - a period in days after which records are evicted

New 'evicted' status for records

Some kind of flag to protect selected records from eviction - mark them as super important in the frontend

401 Unauthorized after waking up the machine from Suspend

client.Token is not refreshed after wake-up?

2023/07/25 23:10:07 [ERROR] failed to get cloud storage report, unable to get cloud storage, status 401, message: .......
>>> stack trace:
main.(*Server).statusHandler(0xc00031c1e0, {0xcf94a0, 0xc0000b6000}, 0x8?)
	/home/gusto/go/src/zoomrs/cmd/service/api.go:137 +0x2ca
net/http.HandlerFunc.ServeHTTP(0xaea140?, {0xcf94a0?, 0xc0000b6000?}, 0xc00002a70c?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
github.com/go-chi/chi/v5.(*Mux).routeHTTP(0xc000072900, {0xcf94a0, 0xc0000b6000}, 0xc000218100)
	/home/gusto/go/src/zoomrs/vendor/github.com/go-chi/chi/v5/mux.go:444 +0x216
net/http.HandlerFunc.ServeHTTP(0x41628a?, {0xcf94a0?, 0xc0000b6000?}, 0xf8?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
github.com/go-pkgz/rest.Throttle.func1.1({0xcf94a0, 0xc0000b6000}, 0x1056001?)
	/home/gusto/go/src/zoomrs/vendor/github.com/go-pkgz/rest/throttle.go:36 +0xd6
net/http.HandlerFunc.ServeHTTP(0xcf9890?, {0xcf94a0?, 0xc0000b6000?}, 0x1056000?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
github.com/go-chi/chi/v5.(*Mux).ServeHTTP(0xc000072900, {0xcf94a0, 0xc0000b6000}, 0xc000218000)
	/home/gusto/go/src/zoomrs/vendor/github.com/go-chi/chi/v5/mux.go:90 +0x310
net/http.serverHandler.ServeHTTP({0xc0000ca3f0?}, {0xcf94a0, 0xc0000b6000}, 0xc000218000)
	/usr/local/go/src/net/http/server.go:2936 +0x316
net/http.(*conn).serve(0xc0003ae000, {0xcf9938, 0xc000332630})
	/usr/local/go/src/net/http/server.go:1995 +0x612
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:3089 +0x5ed


Rate limits should be configurable

For reference, Zoom's medium rate-limit APIs are limited to:

  • Free account: 2 requests/second, 2000 requests/day
  • Pro account: 20 requests/second

It's currently hardcoded for the Free plan account.

Schedule download job

Like 4 am, for cases when the service is hosted on a weak VPS that can't download while serving multiple upload (watch) jobs.
Food for thought

Rethink data directory structure

Like /data/<date>/<id> instead of /data/<id>, so it's easier to browse and manipulate manually.

Will require a tool to re-sort everything that is already downloaded.

Readme

Make a Readme before making repo public

CLI tool: keeping cloud storage under the cap

A kind of independent circuit breaker - the last line of defense against a cloud storage bill

Parameters?

  • how far back in the past to look? How to deal with a big number like 600 days?
  • cap size - how much data to keep (FIFO)

Cold Storage

So I have an external HDD connected to one of the servers, and I copy recordings there for redundancy (a third backup). It would be nice to automate this process. Initial thoughts:

  • define some kind of "cold storage" path in config, probably in "commander" section, so it's just "copy and forget" process, without any downloading
  • FIFO eviction strategy - delete the oldest files when there's no space for the new ones
  • evict as many "days" (since the recordings are grouped in yyyy-mm-dd folders) as needed to make space for new files (any fail-safes??)
  • CLI tool running regularly
  • should it look into DB or be like some kind of "special type of rsync"?
  • cold storage can be either bigger or smaller than the primary repository storage device and should use the whole allocated space - whether that's 10% or 400% of the main storage
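The day-level FIFO eviction could be sketched like this, assuming day folders are listed oldest first; the names and parameters are illustrative:

```go
package main

import "fmt"

// dayFolder is an illustrative yyyy-mm-dd folder with its total size.
type dayFolder struct {
	Date string
	Size int64
}

// daysToEvict picks the oldest day folders (FIFO) to delete until
// `incoming` bytes of new recordings fit within `capacity`, given `used`
// bytes currently on the cold-storage device. `days` must be sorted
// oldest first.
func daysToEvict(days []dayFolder, used, capacity, incoming int64) []string {
	var out []string
	for _, d := range days {
		if used+incoming <= capacity {
			break
		}
		out = append(out, d.Date)
		used -= d.Size
	}
	return out
}

func main() {
	days := []dayFolder{{"2023-05-01", 40}, {"2023-05-02", 40}}
	fmt.Println(daysToEvict(days, 80, 100, 60))
}
```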

Multiserver config issues

Running the service on multiple servers for data-duplication reasons revealed an issue with trashing records on the Zoom server. There's no reliable way to avoid races: one service can't be sure that the other has already downloaded the records before trashing them.
The solution might be:

  • there's still trashing functionality as it is right now, for single-server usage
  • create another app that can be run by cron on the main production server, giving backup servers enough time to download everything...

This requires reworking the directory layout to allow multiple apps in the /cmd/ folder, e.g.:

  • /cmd/server - for the main server that downloads recordings and serves web pages
  • /cmd/sync - to run on backup servers, downloading records on demand. A parameter like -trash would download and then trash records. It can call the main server's /status API to make sure there are no failed or downloading records left

Migrate data between instances

This is about the DB first of all. The scenario:
Something went wrong on one of the instances, but other instances have all the data (recordings and metadata in the DB). This caused -cmd=trash to fail. The cloud storage reached client.cloud_capacity_hard_limit, and -cmd=cloudcap started to force-evict recordings from the cloud. It went unnoticed for some time. There should be a way to sync the instance that lagged behind from an instance that has the missing files and metadata.
