hotg-ai / rune

Rune provides containers to encapsulate and deploy edgeML pipelines and applications

License: Apache License 2.0

Languages: Rust 89.33%, TypeScript 9.25%, Jupyter Notebook 1.22%, Dockerfile 0.20%
Topics: tinyml, rust, containerization, edge-computing, edgeml

rune's People

Contributors

akshr, dependabot[bot], f0rodo, ge-te, jonas-schievink, kthakore, meelislootus, mi1ind, michael-f-bryan, saidinesh5, stranger6667

rune's Issues

Model Input for Person Detection needs fix

The to_rgb8 function converts each pixel to values between 0 and 255.

Each pixel converted to its RGB value should produce 3 numbers, so the input the model reads should be 96 x 96 x 3 values long (the image is 96 x 96 pixels). However, the input actually being read is only 96 x 96 values long.

This is what the first 2 pixels are supposed to be:

(x, y) position - RGB Value
1, 1 - 212, 212, 212
2, 1 - 213, 213, 213

This is part of the first line of the data sent to the Model:

[2021-03-18T23:32:09.529Z DEBUG rune_runtime::runtime] Model 1 input: [212, 212, 212, 213, 213, 213,

The expected and actual data values match, but the input is only a third of the expected length, so the model is effectively reading 1/3 of the photo, which is why it keeps outputting "no person".

Infer types based on adjacent steps in a pipeline

The Runefile syntax lets you write code like this:

CAPABILITY<I32> rand RAND --n 1
PROC_BLOCK<_,_> mod360 hotg-ai/pb-mod --modulo 100
MODEL<I32, F32> sine ./sinemodel.tflite

In this case, the proc block has left its input and output arguments unspecified (_), but because we know the rand capability before it returns a single i32 and the model after it accepts a single i32, we know the mod360 proc block must map i32 -> i32.

We don't actually infer this type information at the moment, though. Instead, we leave it up to the Rust compiler to figure out.

  • Infer input/output types based on adjacent nodes in the pipeline
  • Load a TensorFlow Lite model and check its metadata to find out its input and output types
  • Check proc blocks (will be harder because they can use generics which can have arbitrarily complex trait bounds)

microspeech calibrate_models branch bug

I think the issue might be the i8 implementation in runic-types. There was a previous attempt at this which we did not merge into master: ed8547b

It looks like changes are potentially required in runic-types/value.rs and runic-types/buffer.rs.

Segfault inside `gemmlowp` on mac

While trying to add integration tests for the person detection rune in #128 I noticed that we get an out-of-bounds memory access in just the mac build.

$ rune run person_detection.rune --capability image:examples/person_detection/image_grayscale.png
[2021-04-28T08:57:09.330Z INFO  rune::run] Running rune: /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/.tmpCzTqas/person_detection.rune
[2021-04-28T08:57:09.330Z DEBUG rune::run] Loading an image from \"/Users/runner/work/rune/rune/examples/person_detection/image_grayscale.png\"
[2021-04-28T08:57:10.381Z DEBUG rune_wasmer_runtime] Loading image
[2021-04-28T08:57:10.381Z DEBUG rune_wasmer_runtime] Instantiating the WebAssembly module
[2021-04-28T08:57:10.385Z DEBUG rune_wasmer_runtime] Loaded the Rune
[2021-04-28T08:57:10.385Z DEBUG rune_wasmer_runtime] Running the rune
Error: Call failed

Caused by:
    0: Unable to call the _call function
    1: RuntimeError: out of bounds memory access
    2: heap_get_oob

See this GitHub Actions run for more.

Instance context is given a dangling pointer

When we set the instance's context we give it a pointer to a local variable which will be destroyed when VM::init() returns.

Instead we should box the state and pass ownership of it to the Instance. To avoid leaking memory, Instance also gives us a data_finaliser which will let us automatically clean up the state when it is destroyed.

We'll be using the data pointer for maintaining state between WebAssembly calls and communicating with the outside world, so I'd pull things out into a State struct.

struct VM { ... }

impl VM {
    fn init(...) -> Self {
        ...

        let state = Box::new(State::default());
        let ctx = instance.context_mut();

        // Pass ownership of the boxed `State` to the `Instance`
        ctx.data = Box::into_raw(state) as *mut c_void;
        // then tell the `Instance` how to destroy it afterwards
        ctx.data_finaliser(Some(|data: *mut c_void| {
            // turn `data` back into a Box<State> so it can be dropped
            let _ = unsafe { Box::from_raw(data as *mut State) };
        }));

        ...
    }
}

/// Bag of all the state we want to use for communicating between WebAssembly and the host.
struct State {
  provider: Provider,
}

Runefile Parsing

Allow developers to create a Runefile that can be read using the rune CLI tool.

What is Done

Ability to run

  1. rune build .
  2. Output an AST or intermediate format that the build command can use to make WASM

Memory access out of bounds while running the Gesture Rune

It looks like the gesture rune triggers a memory out-of-bounds error inside _call().

When I ran it in GDB, I saw that wasmer caught a segfault in the middle of a bunch of WebAssembly code. The backtrace was entirely useless because we're in the middle of some JIT-compiled code that doesn't have debug info and we can't see any stack frames above where we enter the WebAssembly.

This might be a bug in wasmer.

Alternatively, I could have messed up the way we stash the pipeline in a global variable (the static PIPELINE: Option<Box<dyn FnMut()>>) so we try to interpret garbage memory as a function pointer.

Steps to Reproduce
$ cd hotg-ai/rune && git checkout ae5b65778bd8913b14f418ab8ba398a10899c2e6

$ cargo rune build examples/gesture/Runefile
    Finished release [optimized] target(s) in 0.87s
     Running `target/release/rune build examples/gesture/Runefile`
[2021-03-03T06:52:00.677Z DEBUG rune::build] Parsing "examples/gesture/Runefile"
warning: Unknown type
   ┌─ examples/gesture/Runefile:11:20

11 │ PROC_BLOCK<f32[4], UTF8> label hotg-ai/rune#proc_blocks/ohv_label --labels=Wing,Ring,Slope,Unknown
   │                    ^^^^

[2021-03-03T06:52:00.678Z DEBUG rune::build] Compiling gesture in "/home/michael/.cache/runes/gesture"
[2021-03-03T06:52:00.681Z DEBUG rune_codegen] Executing "cargo" "build" "--target=wasm32-unknown-unknown" "--quiet" "--release"
[2021-03-03T06:52:01.779Z DEBUG rune::build] Generated 62500 bytes

$ RUST_BACKTRACE=1 cargo rune run examples/gesture/gesture.rune
    Finished release [optimized] target(s) in 0.05s
     Running `target/release/rune run examples/gesture/gesture.rune`
[2021-03-03T06:52:34.792Z INFO  rune::run] Running rune: examples/gesture/gesture.rune
[2021-03-03T06:52:34.792Z DEBUG rune_runtime::runtime] Compiling the WebAssembly to native code
[2021-03-03T06:52:34.800Z DEBUG rune_runtime::runtime] Instantiating the WebAssembly module
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::context] Requested capability RAND with ID 1
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::context] Setting n=Integer(384) on capability 1
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::context] Loaded model 2 with inputs [TensorInfo { name: "conv2d_input", element_kind: kTfLiteFloat32, dims: [1, 128, 3, 1] }] and outputs [TensorInfo { name: "Identity", element_kind: kTfLiteFloat32, dims: [1, 4] }]
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::runtime] Loaded the Rune
[2021-03-03T06:52:34.801Z INFO  rune::run] Call 0
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::runtime] Running the rune
Error: Call failed

Caused by:
    0: Unable to call the _call function
    1: Error when calling invoke: A `memory out-of-bounds access` trap was thrown at code offset 0

Stack backtrace:
   0: rune_runtime::runtime::runtime_error
   1: rune_runtime::runtime::Runtime::call
   2: rune::main
   3: std::sys_common::backtrace::__rust_begin_short_backtrace
   4: std::rt::lang_start::{{closure}}
   5: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
             at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/core/src/ops/function.rs:259:13
      std::panicking::try::do_call
             at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/panicking.rs:379:40
      std::panicking::try
             at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/panicking.rs:343:19
      std::panic::catch_unwind
             at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/panic.rs:431:14
      std::rt::lang_start_internal
             at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/rt.rs:51:25
   6: main
   7: __libc_start_main
   8: _start

Latest Master doesn't work with Runefile

The current Runefile for gesture doesn't work as of commit 7e5241b.

cargo run --bin rune -- build examples/gesture/Runefile

 Finished dev [unoptimized + debuginfo] target(s) in 4m 28s
     Running `target/debug/rune build examples/gesture/Runefile`
[2021-02-28T20:01:23Z INFO  rune] Rune v0.0.2
[2021-02-28T20:01:23Z ERROR runefile_parser::parser] Step doesn't follow expected grammar: CAPABILITY<F32[384]> accelerometer ACCEL -n 128

Optimise Rune compilation times

We are seeing issues in hammerd where the time it takes for rune build to complete is becoming prohibitive.

Possible ways to improve compilation are:

  • Pre-fetch all common dependencies - it seems like the time required to download dependencies, especially our proc blocks (git dependencies require fetching the entire repo) is a bit of a bottleneck
  • Pre-compile common dependencies - build the dependencies inside the Docker image then just copy them (maybe use hard links to avoid actual copying) into the directory used to build a Rune
  • Aggressively remove dependencies from proc blocks and runic-types - this feels like an easy win because we pull in serde-derive and serde-json. We could always manually write some code that generates JSON strings
  • Use some sort of shared build cache like sccache

proc_block for gesture prediction

Creating a gesture predictor.

Input: a TFLite array, [f32; 64]

Look at this for reference.

Further improvements: add a confidence threshold to stop uncertain gestures from being displayed. It can be placed before .map.

Make github actions act again

We've recently used up the organisation's CI minutes, which is blocking several PRs from being merged (#76, #74, #63).

To resolve this and make sure it doesn't happen again, we need to:

  • Make the appropriate changes to the organisation's plan (i.e. buy more minutes or change plans)
  • Do an analysis of our current usage and optimise builds
    • Where are we spending our CI minutes?
    • What can we do to reduce build time (remove unnecessary jobs, caching, etc.)?
  • Explore alternate options
    • Different providers (Travis, Circle CI, etc.)
    • Use on-premises machines for really long/expensive builds (e.g. mac)

CC: @kthakore

Rune doesn't work on Ubuntu 18.XX

When using the nightly rune prebuilt executable, I get this error:

./rune: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./rune)

Dynamic WASM Memory Limit

Modify/allocate the required memory size as needed so we never run into out-of-bounds memory errors.
Configure it through something like a rune.json file (similar to package.json).

Make the Python bindings to our proc blocks compile reliably on mac

As part of #100 I found that compiling the proc_blocks/python/ code fails to link to libpython on a Mac. For now I've skipped generating the Python bindings on Mac, but ideally we'd figure out why it fails to link.

            "_PyBytes_AsString", referenced from:
                pyo3::types::string::_$LT$impl$u20$pyo3..conversion..FromPyObject$u20$for$u20$alloc..string..String$GT$::extract::h9638ede2048f915d in libpyo3-50db6c8a19c7c585.rlib(pyo3-50db6c8a19c7c585.pyo3.43lgczmm-cgu.12.rcgu.o)
                pyo3::types::string::PyString::to_string_lossy::hd5324a2f0ea4480f in libpyo3-50db6c8a19c7c585.rlib(pyo3-50db6c8a19c7c585.pyo3.43lgczmm-cgu.14.rcgu.o)
                pyo3::types::string::_$LT$impl$u20$pyo3..conversion..FromPyObject$u20$for$u20$$RF$str$GT$::extract::hfb31b7a3a75da54d in libpyo3-50db6c8a19c7c585.rlib(pyo3-50db6c8a19c7c585.pyo3.43lgczmm-cgu.14.rcgu.o)
            "_PyErr_Restore", referenced from:
                proc_blocks::normalize::__init2035879285165519677::__wrap::h34fbedbf00b614f2 in proc_blocks.proc_blocks.7y4hrf10-cgu.10.rcgu.o
                proc_blocks::normalize::__init2035879285165519677::__wrap::h492ab7dcc548bacd in proc_blocks.proc_blocks.7y4hrf10-cgu.10.rcgu.o
                proc_blocks::fft::__init8185045372208238284::__wrap::hac9ea221cb21e5b4 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
                proc_blocks::fft::__init8185045372208238284::__wrap::hdc12c9f375452fc8 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
                proc_blocks::fft::__init8185045372208238284::__wrap::h0d8eb61d3689dea4 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
                proc_blocks::fft::__init8185045372208238284::__wrap::hfa8deb03b803ee11 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
                _PyInit_proc_blocks in proc_blocks.proc_blocks.7y4hrf10-cgu.8.rcgu.o
                ...
          ld: symbol(s) not found for architecture x86_64
          clang: error: linker command failed with exit code 1 (use -v to see invocation)

See the "Build release artifacts for macos-latest" logs: logs_335.zip

How can we run models in the browser?

As part of our overall plan for the project, we want to run Runes in the web browser. It should be simple enough to port our runtime to wasm-bindgen (e.g. rune-web-runtime); the problem is that we currently use TensorFlow Lite models, whereas libraries like TensorFlow.js use full TensorFlow models.

The rune-web-runtime will be able to call into JavaScript, so we have access to both Rust and JavaScript libraries.

  • Use Tensorflow.js and convert TensorFlow Lite to TensorFlow models on the fly or ahead of time
  • Cross-compile Apache TVM to WebAssembly

@meelislootus and @kthakore, let me know if there are some alternatives I've missed.

Runefile syntax for multiple inputs and outputs

I'm trying to figure out the syntax for a Runefile where each stage might have multiple inputs and multiple outputs, and I was hoping to get some input.

Example

As an example, let's do some DSP preprocessing which takes data from the audio capability and passes it through a fft preprocessor. Then on the other side we have samples from the accelerometer being passed through a normalize step to make the data independent of how far/hard you move the device.

Next we take the outputs from fft and accelerometer and send them to a model. This model sends a list of probabilities to the label proc block which then turns it to a UTF-8 string that gets sent to the main output.

We've also got a debug output which taps into fft, model, and label, and which you may want to disable independently.

That's a lot of words, but we're essentially trying to construct this pipeline:

(Pipeline diagram: DeepinScreenshot_select-area_20210502174923)

Concept 1 - Modified Runefile

One syntax idea is to use --input some_stage and --input-type f32[42] for declaring a stage's input(s), with the corresponding --output some_stage and --output-type i8[1960] for specifying the output stage and its type. Under this model you can have multiple inputs but a single output (but that output may be copied to multiple stages).

This means we'd stop using angle brackets for input/output types because I think it'd be hard to adapt the PROC_BLOCK<input_type, output_type> syntax to multiple inputs/outputs without it being ugly or turning into punctuation soup (e.g. PROC_BLOCK<(i32[1920, 1080], f32[128, 3]), f32[4]> model ...).

FROM runicos/base

CAPABILITY audio SOUND --hz 16000 --output-type I16[16000]
CAPABILITY accelerometer ACCEL -n 128 --output-type F32[128, 3]

PROC_BLOCK fft hotg-ai/rune#proc_blocks/fft \
    --input audio --input-type I16[16000] \
    --output-type I8[1960]
PROC_BLOCK normalize hotg-ai/rune#proc_blocks/normalize \
    --input accelerometer --input-type f32[128, 3] \
    --output-type f32[128, 3]
MODEL model ./model.tflite --input audio \
    --input fft \
    --input normalize --input-type f32[128, 3] \
    --output-type f32[4]
PROC_BLOCK label hotg-ai/rune#proc_blocks/ohv_label \
    --labels=Wing,Ring,Slope,Unknown \
    --input model \
    --output-type UTF8

OUT output SERIAL --input label
OUT debug SERIAL --input fft --input model --input label

That would be a relatively minor change to the language syntax and probably take a couple of hours of fiddling in the parser. Unfortunately, it makes a Runefile context sensitive (an --input-type argument can only follow an --input, and some arguments are specific to certain commands), which isn't nice from a language design standpoint.

Concept 2 - Lua

Alternatively, instead of writing our own DSL we could leverage an existing one (e.g. Lua with our own custom functions):

audio = capability {
    kind = "SOUND",
    output_type = i16(16000),
    args = { hz = 16000 },
}

accelerometer = capability { kind = "ACCEL", output_type = f32(128, 3) }

fft = proc_block {
    source = "hotg-ai/rune#proc_blocks/fft",
    input = audio,
    output_type = i8(1960),
}

normalize = proc_block {
    source = "hotg-ai/rune#proc_blocks/normalize",
    input = accelerometer,
    output_type = f32(128, 3),
}

model = model {
    filename = "./model.tflite",
    input = { fft, normalize },
    output_type = f32(4),
}

label = proc_block {
    source = "hotg-ai/rune#proc_blocks/ohv_label",
    input = model,
    output_type = utf8,
    args = {
        labels = { "Wing", "Ring", "Slope", "Unknown" },
    },
}

output = out { kind = "SERIAL", input = { label } }
debug = out { kind = "SERIAL", input = { fft, model, label } }

I'd maybe consider the above for a Runefile v3 seeing as it's quite a large change.

More Expressive Pipelines

This is a tracking issue for letting users construct more expressive pipelines in their Runefiles, specifically:

  • Allow pipelines to form an acyclic directed graph, where each stage may have multiple inputs and multiple outputs
    • Decide on a syntax and implement it in the parser/analyser (#140)
    • Pipeline stages can have multiple inputs
    • Pipeline stages can have multiple outputs ("multiple outputs" - #158)
    • The same stage output can be used as the input for multiple other stages ("single shared output")

Some optional additions to this would be:

  • Create a rune graph command which takes a Runefile and generates a visual diagram of the pipeline, complete with input/output types

The microspeech model expects 1.5s but our input files are only 1s long

The microspeech Runefile expects to receive 24,000 samples of audio at 16kHz, but yes_01d22d03_nohash_0.wav and friends are only 16,000 samples long.

This isn't a problem when you compile the Rune in release mode (rune build Runefile), but when compiled in debug mode (rune build Runefile --debug) we trigger this assertion.

[2021-03-25T18:30:19.671Z INFO ] panicked at 'assertion failed: `(left == right)`
    left: `32000`,
   right: `48000`', /home/michael/Documents/hotg-ai/rune/runic-types/src/wasm32/mod.rs:92:9
Error: Call failed

Caused by:
    0: Unable to call the _call function
    1: RuntimeError: unreachable
           at core::core_arch::wasm32::unreachable::hb8a7ba5af00cd3dd (<module>[1099]:0x3d931)
           at rust_begin_unwind (<module>[1051]:0x3c101)
           at core::panicking::panic_fmt::hfa15f5472ef5e557 (<module>[1428]:0x4aed4)
           at core::panicking::assert_failed_inner::h1ff1547b4e20ab23 (<module>[1442]:0x4baf6)
           at core::panicking::assert_failed::h4a336faee37010c7 (<module>[968]:0x3786c)
           at runic_types::wasm32::copy_capability_data_to_buffer::h3df68d48e80f298c (<module>[225]:0x97da)
           at <runic_types::wasm32::sound::Sound<_> as runic_types::pipelines::Source>::generate::h4d22008f54108f0a (<module>[125]:0x578a)
           at microspeech::_manifest::{{closure}}::h31e7eddd059d1b5c (<module>[107]:0x4300)
           at <alloc::boxed::Box<F,A> as core::ops::function::FnMut<Args>>::call_mut::he6db486207a4a9a2 (<module>[55]:0x26c0)
           at _call (<module>[127]:0x5e9a)
    2: unreachable

However, if you update the capability and proc blocks in the Runefile to work with an I16[16000] (so our debug assertion is happy), the model now thinks our "yes" example is "silence" and tests start failing. I'm guessing all those trailing zeroes make a difference once they go through the FFT.

@meelislootus do you have any idea what's going on here?

{May 7th} Microspeech Calibration

Make microspeech work in the example Colab.

  • The TensorFlow C code walkthrough is done
  • The Rust / C mapping is almost done
  • The remaining issue is a difference in the Sonogram FFT

Closing this out needs a few more days, plus blog posts.

Unable to run runefile-parser tests due to dependency cycle

When runic_pb_fft::Processor implements the runic_types::proc_block::ProcBlock trait it adds runic_types as a git dependency.

Like any normal trait, when we try to use the Processor's process() method inside runefile-parser tests we get the following compile error:

error[E0599]: no method named `process` found for struct `Processor` in the current scope
  --> runefile-parser/src/lib.rs:38:20
   |
38 |         return fft.process(waveform, map);
   |                    ^^^^^^^
   |
   = help: items from traits can only be used if the trait is in scope
   = note: the following trait is implemented but not in scope; perhaps add a `use` for it:
           `use runic_types::proc_block::ProcBlock;`

That makes sense: we need to pull in the runic_types::proc_block::ProcBlock trait before we can use it... but adding that use doesn't work.

It looks like runic-pb-fft has added runic-types (from the hotg-ai/rune repository) as a git dependency and the runic_types::proc_block::ProcBlock that we use inside runefile-parser comes from ../runic-types. Cargo sees these two as completely different crates, meaning we are importing the wrong ProcBlock.

The two options I see are:

  1. Move runic-pb-fft inside the rune repository so we aren't using git dependencies, or
  2. Move runic-types into its own repository and make both runefile-parser and runic-pb-fft depend on it using git dependencies, probably also pinning the commit (e.g. runic-types = { git = "...", rev = "abcd1234" })

Do we need to include cargo as a dependency?

Considering we need rustc installed on the user's PC so we can compile things anyway (cargo just shells out to it), what benefit do we gain by pulling in cargo as a library instead of invoking it via std::process::Command?

Dropping the cargo dependency will probably halve the crate's build time and remove a bunch of complexity.

What is our story around streamed data?

In today's sync with @kthakore, @Mi1ind, and @meelislootus, Meelis said that in the ML community you would handle streamed data by constructing/preprocessing your inputs to only ever be the size you care about.

Say you were implementing fall detection in real time on a phone. I'm guessing you'd set up the app so that the Rune is invoked periodically (e.g. every second). Then, as part of initializing the accelerometer capability, you'd ask to receive the last 500 samples at a sample rate of (for example) 100 Hz.

It may look something like this in a Runefile:

CAPABILITY<f32[500]> accelerometer ACCEL --sample-rate 100

(note: the number of samples is determined by the capability output dimensions)

On the PC side (rune run) you would provide the CLI with a list of accelerometer samples and the runtime could be told to run for X seconds. When we ask a capability for data we would tell it how many seconds have passed since we started and using the sample rate the capability could figure out which 500 samples to send back to the Rune.

Person Detection: Image/Video

Make the Runefile and any post-processing proc blocks.

  • Update the C code to TFLite
  • Post-processing proc block for labelling
  • We want a normalization proc block

Set up the SSH agent for CI

We accidentally broke master when #51 was merged because the fft proc block tries to pull in sonograph (a private hotg-ai repo) as a git dependency.

CI Failure Message
/usr/share/rust/.cargo/bin/cargo check --workspace --verbose
    Updating crates.io index
    Updating git repository `ssh://[email protected]/hotg-ai/sonogram`
error: failed to get `sonogram` as a dependency of package `FFT v0.1.0 (/home/runner/work/rune/rune/proc_blocks/fft)`
Error: failed to get `sonogram` as a dependency of package `FFT v0.1.0 (/home/runner/work/rune/rune/proc_blocks/fft)`
Caused by:
  failed to load source for dependency `sonogram`

Caused by:
  Unable to update ssh://[email protected]/hotg-ai/sonogram#39d4c460

Caused by:
  failed to clone into: /home/runner/.cargo/git/db/sonogram-b77ca210ffbc4b64

Caused by:
  failed to authenticate when downloading repository

  * attempted ssh-agent authentication, but no usernames succeeded: `git`

  if the git CLI succeeds then `net.git-fetch-with-cli` may help here
  https://doc.rust-lang.org/cargo/reference/config.html#netgit-fetch-with-cli

Caused by:
  error authenticating: no auth sock variable; class=Ssh (23)

To let CI use other private repos we need to:

After that, SSH authentication should work and CI will pass again.

If it doesn't, we might need to tell cargo to shell out to git instead of using the compiled-in libssh2 by setting git-fetch-with-cli = true under the [net] table in .cargo/config.

We may also want to set up GitHub's protected branches so nobody can push directly to master and CI needs to pass before a PR can be merged.

Tutorials and Documentation

  • Onboarding tutorials
  • Developer documents (less about internals)

Feedback:

  • Assume built binaries

Mini Release:

  • Release hotg.dev

Onboarding Session


Some long-form tutorials that we can publish to tinyVerse:

  • What's in a Rune? Explain the various concepts in a Rune and how they are implemented - @Michael-F-Bryan
  • Creating your own proc block
    • Components to build a proc block
    • Example implementation of proc block (e.g. normalize, FFT, labeling, or debounce)
  • Embedding the Rune runtime in a desktop/server application from Rust - @Michael-F-Bryan
  • Embedding the Rune runtime in a C++ application - @Michael-F-Bryan
  • Embedding the Rune runtime in an iOS app @Ge-te
  • How an app user uses Rune, hammer, and RuneVM to build an app (end of May)

Memory Leak/Fragmentation

We found an issue where running the microspeech Rune in an infinite loop will keep consuming memory until we've exhausted WebAssembly's 32-bit address space.

Steps to reproduce:

$ git checkout ea982ab
$ cargo --version --verbose
cargo 1.52.0-nightly (90691f2bf 2021-03-16)
release: 1.52.0
commit-hash: 90691f2bfe9a50291a98983b1ed2feab51d5ca55
commit-date: 2021-03-16
$ rustc --version --verbose
rustc 1.52.0-nightly (36f1f04f1 2021-03-17)
binary: rustc
commit-hash: 36f1f04f18b89ba4a999bcfd6584663fd6fc1c5d
commit-date: 2021-03-17
host: x86_64-unknown-linux-gnu
release: 1.52.0-nightly
LLVM version: 12.0.0

$ cargo rune build examples/microspeech/Runefile
[2021-03-18T10:15:13.110Z DEBUG rune::build] Parsing "examples/microspeech/Runefile"
[2021-03-18T10:15:13.111Z DEBUG rune::build] Compiling microspeech in "/home/michael/.cache/runes/microspeech"
[2021-03-18T10:15:13.114Z DEBUG rune_codegen] Executing "cargo" "build" "--target=wasm32-unknown-unknown" "--quiet" "--release"
[2021-03-18T10:15:14.062Z DEBUG rune::build] Generated 58231 bytes

$ cargo rune run examples/microspeech/microspeech.rune --capability sound:examples/microspeech/data/no_b66f4f93_nohash_8.wav --repeats 1000000000

(Screenshot: DeepinScreenshot_select-area_20210317044312)

Rune Run

Allow the rune CLI to execute WebAssembly.

What is Done

  1. Need a way for the rune CLI to simulate audio and accelerometer input
  2. Output log files from the model

{June} Research & Implement Runic VM SDK into Browser

Make rune work in the browser with all 4 runes.

  • Link everything up with the WebAssembly runtime
  • Execute tf.js
  • Capabilities from JS / the browser
  • Update the tfjs-tflite integration into our RuneVM JS
  • Part of the tutorial: put this into a PWA app
