hotg-ai / rune
Rune provides containers to encapsulate and deploy edgeML pipelines and applications
License: Apache License 2.0
Re-train our existing models using data from the phone.
The to_rgb8 function converts each pixel to a value between 0 and 255. Each pixel converted to its RGB value should have 3 numbers, so the size the model should read is 96 x 96 x 3 (with the image being 96 x 96 pixels large), but the length of the data the model actually reads is only 96 x 96.
This is what the first 2 pixels are supposed to be
(x, y) position - RGB Value
1, 1 - 212, 212, 212
2, 1 - 213, 213, 213
This is part of the first line of the data sent to the Model:
[2021-03-18T23:32:09.529Z DEBUG rune_runtime::runtime] Model 1 input: [212, 212, 212, 213, 213, 213,
The expected and actual data values are the same, so the model is only reading 1/3 of the photo, which is why it keeps outputting "no person".
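A quick sanity check of the buffer sizes (a sketch; the 96 x 96 dimensions come from the issue above) shows why a grayscale-sized read is exactly 1/3 of what the model expects:

```rust
fn main() {
    let (width, height) = (96_usize, 96_usize);

    // A grayscale image stores 1 value per pixel...
    let grayscale_len = width * height;

    // ...while the output of a to_rgb8-style conversion should be
    // 3 values (R, G, B) per pixel.
    let rgb_len = width * height * 3;

    assert_eq!(grayscale_len, 9_216);
    assert_eq!(rgb_len, 27_648);
    assert_eq!(rgb_len, grayscale_len * 3); // the model sees only 1/3
}
```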
The Runefile syntax lets you write code like this:
CAPABILITY<I32> rand RAND --n 1
PROC_BLOCK<_,_> mod360 hotg-ai/pb-mod --modulo 100
MODEL<I32, F32> sine ./sinemodel.tflite
In this case, the proc block has left its input and output arguments unspecified (_), but because we know the rand capability before it returns a single i32, and the model after it accepts a single i32, we know the mod360 proc block must map i32 -> i32.
We don't actually infer this type information at the moment, though; instead, we leave it up to the Rust compiler to figure out.
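The way this falls out of rustc's inference can be sketched with a hypothetical Transform trait (illustrative names, not the real runic-types API): the proc block is generic, and the compiler unifies its type parameters with the neighbouring stages.

```rust
// Hypothetical stand-in for a generated pipeline stage; the real
// runic-types trait may differ.
struct Modulo {
    modulo: i32,
}

trait Transform<Input> {
    type Output;
    fn transform(&mut self, input: Input) -> Self::Output;
}

impl Transform<i32> for Modulo {
    type Output = i32;
    fn transform(&mut self, input: i32) -> i32 {
        input % self.modulo
    }
}

fn main() {
    let rand_output: i32 = 42; // what the `rand` capability yields
    let mut mod360 = Modulo { modulo: 100 };

    // No annotations needed on mod360's input/output: rustc unifies
    // them with the surrounding stages, just like the `_` placeholders.
    let model_input = mod360.transform(rand_output);
    assert_eq!(model_input, 42);
}
```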
I think the issue might be the i8 implementation in runic-types: this was a previous attempt at this, which we did not merge into master: ed8547b
It looks like changes are potentially required under runic-types/value.rs and runic-types/buffer.rs.
Potentially split the labelling and debouncer
While trying to add integration tests for the person detection rune in #128 I noticed that we get an out-of-bounds memory access in just the mac build.
$ rune run person_detection.rune --capability image:examples/person_detection/image_grayscale.png
[2021-04-28T08:57:09.330Z INFO rune::run] Running rune: /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/.tmpCzTqas/person_detection.rune
[2021-04-28T08:57:09.330Z DEBUG rune::run] Loading an image from "/Users/runner/work/rune/rune/examples/person_detection/image_grayscale.png"
[2021-04-28T08:57:10.381Z DEBUG rune_wasmer_runtime] Loading image
[2021-04-28T08:57:10.381Z DEBUG rune_wasmer_runtime] Instantiating the WebAssembly module
[2021-04-28T08:57:10.385Z DEBUG rune_wasmer_runtime] Loaded the Rune
[2021-04-28T08:57:10.385Z DEBUG rune_wasmer_runtime] Running the rune
Error: Call failed
Caused by:
0: Unable to call the _call function
1: RuntimeError: out of bounds memory access
2: heap_get_oob
See this GitHub Actions run for more.
Using generics to create a single aggregator proc block which can consume primitive datatypes as required.
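One possible shape for that aggregator (a sketch, not the actual proc block API): a single struct generic over any primitive that can be accumulated.

```rust
use std::ops::Add;

/// Hypothetical aggregator proc block, generic over any primitive that
/// supports addition and has a zero-like default.
struct Aggregator<T> {
    samples: Vec<T>,
}

impl<T: Copy + Default + Add<Output = T>> Aggregator<T> {
    fn new() -> Self {
        Aggregator { samples: Vec::new() }
    }

    fn consume(&mut self, value: T) {
        self.samples.push(value);
    }

    fn sum(&self) -> T {
        self.samples.iter().copied().fold(T::default(), |acc, x| acc + x)
    }
}

fn main() {
    // The same type works for i32, f32, u8, ...
    let mut ints: Aggregator<i32> = Aggregator::new();
    ints.consume(1);
    ints.consume(2);
    assert_eq!(ints.sum(), 3);

    let mut floats: Aggregator<f32> = Aggregator::new();
    floats.consume(0.5);
    floats.consume(0.25);
    assert!((floats.sum() - 0.75).abs() < 1e-6);
}
```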
When we set the instance's context we give it a pointer to a local variable which will be destroyed when VM::init()
returns.
Instead we should box the state and pass ownership of it to the Instance
. To avoid leaking memory, Instance
also gives us a data_finaliser
which will let us automatically clean up the state when it is destroyed.
We'll be using the data
pointer for maintaining state between WebAssembly calls and communicating with the outside world, so I'd pull things out into a State
struct.
struct VM { ... }

impl VM {
    fn init(...) -> Self {
        ...
        let state = Box::new(State::default());
        let mut ctx = instance.context_mut();

        // Pass ownership of the boxed `State` to the `Instance`
        ctx.data = Box::into_raw(state) as *mut c_void;

        // then tell the `Instance` how to destroy it afterwards
        ctx.data_finaliser(Some(|data: *mut c_void| {
            // turn `data` back into a Box<State> so it can be destroyed
            unsafe {
                let _ = Box::from_raw(data as *mut State);
            }
        }));
        ...
    }
}

/// Bag of all the state we want to use for communicating between WebAssembly and the host.
struct State {
    provider: Provider,
}
Allow developers to create a Rune file that can be read using the rune CLI tool.
Ability to run
rune build .
It looks like the gesture rune triggers a memory out-of-bounds error inside _call()
.
When I ran it in GDB, I saw that wasmer
caught a segfault in the middle of a bunch of WebAssembly code. The backtrace was entirely useless because we're in the middle of some JIT-compiled code that doesn't have debug info and we can't see any stack frames above where we enter the WebAssembly.
This might be a bug in wasmer
.
Alternatively, I could have messed up the way we stash the pipeline in a global variable (the static PIPELINE: Option<Box<dyn FnMut()>>
) so we try to interpret garbage memory as a function pointer.
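For reference, the pattern being described looks roughly like this (a simplified sketch of the generated code, not the exact source): if the global is never initialised, or gets clobbered, _call() dereferences garbage.

```rust
use std::ptr::addr_of_mut;
use std::sync::atomic::{AtomicBool, Ordering};

static CALLED: AtomicBool = AtomicBool::new(false);

// Simplified sketch of how a generated Rune stashes its pipeline.
static mut PIPELINE: Option<Box<dyn FnMut()>> = None;

#[no_mangle]
pub extern "C" fn _manifest() {
    // If this global is never set up (or is clobbered), _call() ends up
    // treating garbage memory as the boxed closure's vtable.
    unsafe {
        *addr_of_mut!(PIPELINE) = Some(Box::new(|| CALLED.store(true, Ordering::SeqCst)));
    }
}

#[no_mangle]
pub extern "C" fn _call() {
    unsafe {
        if let Some(pipeline) = (*addr_of_mut!(PIPELINE)).as_mut() {
            pipeline();
        }
    }
}

fn main() {
    _manifest();
    _call(); // only sound because _manifest() ran first
    assert!(CALLED.load(Ordering::SeqCst));
}
```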
$ cd hotg-ai/rune && git checkout ae5b65778bd8913b14f418ab8ba398a10899c2e6
$ cargo rune build examples/gesture/Runefile
Finished release [optimized] target(s) in 0.87s
Running `target/release/rune build examples/gesture/Runefile`
[2021-03-03T06:52:00.677Z DEBUG rune::build] Parsing "examples/gesture/Runefile"
warning: Unknown type
┌─ examples/gesture/Runefile:11:20
│
11 │ PROC_BLOCK<f32[4], UTF8> label hotg-ai/rune#proc_blocks/ohv_label --labels=Wing,Ring,Slope,Unknown
│ ^^^^
[2021-03-03T06:52:00.678Z DEBUG rune::build] Compiling gesture in "/home/michael/.cache/runes/gesture"
[2021-03-03T06:52:00.681Z DEBUG rune_codegen] Executing "cargo" "build" "--target=wasm32-unknown-unknown" "--quiet" "--release"
[2021-03-03T06:52:01.779Z DEBUG rune::build] Generated 62500 bytes
$ RUST_BACKTRACE=1 cargo rune run examples/gesture/gesture.rune
Finished release [optimized] target(s) in 0.05s
Running `target/release/rune run examples/gesture/gesture.rune`
[2021-03-03T06:52:34.792Z INFO rune::run] Running rune: examples/gesture/gesture.rune
[2021-03-03T06:52:34.792Z DEBUG rune_runtime::runtime] Compiling the WebAssembly to native code
[2021-03-03T06:52:34.800Z DEBUG rune_runtime::runtime] Instantiating the WebAssembly module
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::context] Requested capability RAND with ID 1
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::context] Setting n=Integer(384) on capability 1
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::context] Loaded model 2 with inputs [TensorInfo { name: "conv2d_input", element_kind: kTfLiteFloat32, dims: [1, 128, 3, 1] }] and outputs [TensorInfo { name: "Identity", element_kind: kTfLiteFloat32, dims: [1, 4] }]
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::runtime] Loaded the Rune
[2021-03-03T06:52:34.801Z INFO rune::run] Call 0
[2021-03-03T06:52:34.801Z DEBUG rune_runtime::runtime] Running the rune
Error: Call failed
Caused by:
0: Unable to call the _call function
1: Error when calling invoke: A `memory out-of-bounds access` trap was thrown at code offset 0
Stack backtrace:
0: rune_runtime::runtime::runtime_error
1: rune_runtime::runtime::Runtime::call
2: rune::main
3: std::sys_common::backtrace::__rust_begin_short_backtrace
4: std::rt::lang_start::{{closure}}
5: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/core/src/ops/function.rs:259:13
std::panicking::try::do_call
at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/panicking.rs:379:40
std::panicking::try
at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/panicking.rs:343:19
std::panic::catch_unwind
at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/panic.rs:431:14
std::rt::lang_start_internal
at /rustc/4f20caa6258d4c74ce6b316fd347e3efe81cf557/library/std/src/rt.rs:51:25
6: main
7: __libc_start_main
8: _start
The current runefile for Gesture doesn't work in commit 7e5241b
cargo run --bin rune -- build examples/gesture/Runefile
Finished dev [unoptimized + debuginfo] target(s) in 4m 28s
Running `target/debug/rune build examples/gesture/Runefile`
[2021-02-28T20:01:23Z INFO rune] Rune v0.0.2
[2021-02-28T20:01:23Z ERROR runefile_parser::parser] Step doesn't follow expected grammar: CAPABILITY<F32[384]> accelerometer ACCEL -n 128
We are seeing issues in hammerd
where the time it takes for rune build
to complete is becoming prohibitive.
Possible ways to improve compilation are:
- runic-types - this feels like an easy win because we pull in serde-derive and serde-json. We could always manually write some code that generates JSON strings
- sccache
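If we go the sccache route, wiring it in is a small config change (assuming sccache is installed and on the PATH; this is the standard Cargo mechanism, nothing rune-specific):

```toml
# .cargo/config - use sccache as the rustc wrapper so repeated
# `rune build` invocations reuse cached compilation results
[build]
rustc-wrapper = "sccache"
```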
Debouncer causes mobile app to receive "<MISSING>" output.
App now retries until model output is received.
Happens in gesture and person detection.
We've recently used up the organisation's CI minutes, which is blocking several PRs from being merged (#76, #74, #63).
To resolve this and make sure it doesn't happen again, we need to:
CC: @kthakore
Checklist:
- .cargo/config similar to examples/sine/rune-rs/.cargo/config
- examples/boilerplate
- MODEL instruction and write bytes to model.rs file
- CAPABILITY instruction to generate new code

Output in proc_block/microspeech_agg/lib.rs:
thread 'tests::throttling' panicked at 'attempt to add with overflow'
Output with actual data:
<MISSING>
I get this error when using the nightly rune prebuilt executable:
/lib/x86_64-linux-gnu/libm.so.6: version 'GLIBC_2.29' not found (required by ./rune)
We need to add a new method to Environment
for retrieving the most recent accelerometer samples, then make sure it gets called from the Context
.
(this would implement the "Capabilities implementation" part of #31)
Make the microspeech rune:
- rune build
- rune run
Modify/allocate the required memory size as needed so we never run into an out-of-bounds memory error.
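WebAssembly linear memory grows in 64 KiB pages, so "allocate the required memory size" boils down to rounding the working set up to whole pages. A sketch of that arithmetic (not rune's actual allocator):

```rust
// WebAssembly linear memory grows in 64 KiB pages.
const WASM_PAGE_SIZE: usize = 64 * 1024;

/// Number of whole pages needed to hold `bytes` without an
/// out-of-bounds access.
fn pages_needed(bytes: usize) -> usize {
    (bytes + WASM_PAGE_SIZE - 1) / WASM_PAGE_SIZE
}

fn main() {
    assert_eq!(pages_needed(0), 0);
    assert_eq!(pages_needed(65_536), 1);
    assert_eq!(pages_needed(65_537), 2);
    assert_eq!(pages_needed(96 * 96 * 3), 1); // a 96x96 RGB image fits in one page
}
```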
Make it like a rune.json file (similar to package.json).
As part of #100 I found that compiling the proc_blocks/python/
code fails to link to libpython
on a Mac. For now I've skipped generating the Python bindings on Mac, but ideally we'd figure out why it fails to link.
"_PyBytes_AsString", referenced from:
pyo3::types::string::_$LT$impl$u20$pyo3..conversion..FromPyObject$u20$for$u20$alloc..string..String$GT$::extract::h9638ede2048f915d in libpyo3-50db6c8a19c7c585.rlib(pyo3-50db6c8a19c7c585.pyo3.43lgczmm-cgu.12.rcgu.o)
pyo3::types::string::PyString::to_string_lossy::hd5324a2f0ea4480f in libpyo3-50db6c8a19c7c585.rlib(pyo3-50db6c8a19c7c585.pyo3.43lgczmm-cgu.14.rcgu.o)
pyo3::types::string::_$LT$impl$u20$pyo3..conversion..FromPyObject$u20$for$u20$$RF$str$GT$::extract::hfb31b7a3a75da54d in libpyo3-50db6c8a19c7c585.rlib(pyo3-50db6c8a19c7c585.pyo3.43lgczmm-cgu.14.rcgu.o)
"_PyErr_Restore", referenced from:
proc_blocks::normalize::__init2035879285165519677::__wrap::h34fbedbf00b614f2 in proc_blocks.proc_blocks.7y4hrf10-cgu.10.rcgu.o
proc_blocks::normalize::__init2035879285165519677::__wrap::h492ab7dcc548bacd in proc_blocks.proc_blocks.7y4hrf10-cgu.10.rcgu.o
proc_blocks::fft::__init8185045372208238284::__wrap::hac9ea221cb21e5b4 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
proc_blocks::fft::__init8185045372208238284::__wrap::hdc12c9f375452fc8 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
proc_blocks::fft::__init8185045372208238284::__wrap::h0d8eb61d3689dea4 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
proc_blocks::fft::__init8185045372208238284::__wrap::hfa8deb03b803ee11 in proc_blocks.proc_blocks.7y4hrf10-cgu.5.rcgu.o
_PyInit_proc_blocks in proc_blocks.proc_blocks.7y4hrf10-cgu.8.rcgu.o
...
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
See the "Build release artifacts for macos-latest" logs: logs_335.zip
As part of our overall plan for the project, we want to run Runes in the web browser. It should be simple enough to port our runtime to wasm-bindgen (e.g. rune-web-runtime
), the problem is that we currently use TensorFlow Lite models when libraries like Tensorflow.js use full TensorFlow models.
The rune-web-runtime
will be able to call into JavaScript, so we have access to both Rust and JavaScript libraries.
- Tensorflow.js, and convert TensorFlow Lite to TensorFlow models on the fly or ahead of time

@meelislootus and @kthakore, let me know if there are some alternatives I've missed.
I'm trying to figure out the syntax for a Runefile where each stage might have multiple inputs and multiple outputs, and I was hoping to get some input.
As an example, let's do some DSP preprocessing which takes data from the audio
capability and passes it through a fft
preprocessor. Then on the other side we have samples from the accelerometer
being passed through a normalize
step to make the data independent of how far/hard you move the device.
Next we take the outputs from fft
and accelerometer
and send them to a model
. This model sends a list of probabilities to the label
proc block which then turns it to a UTF-8 string that gets sent to the main output
.
We've also got a debug
output which taps into fft
, model
, and label
and which you may want to disable independently.
That's a lot of words, but we're essentially trying to construct this pipeline:
One syntax idea is to use --input some_stage
and --input-type f32[42]
for declaring a stage's input(s), with the corresponding --output some_stage
and --output-type i8[1960]
for specifying the output stage and its type. Under this model you can have multiple inputs but a single output (but that output may be copied to multiple stages).
This means we'd stop using angle brackets for input/output types because I think it'd be hard to adapt the PROC_BLOCK<input_type, output_type>
syntax to multiple inputs/outputs without it being ugly or turning into punctuation soup (e.g. PROC_BLOCK<(i32[1920, 1080], f32[128, 3]), f32[4]> model ...
).
FROM runicos/base
CAPABILITY audio SOUND --hz 16000 --output-type I16[16000]
CAPABILITY accelerometer ACCEL -n 128 --output-type F32[128, 3]
PROC_BLOCK fft hotg-ai/rune#proc_blocks/fft \
--input audio --input-type I16[16000] \
--output-type I8[1960]
PROC_BLOCK normalize hotg-ai/rune#proc_blocks/normalize \
--input accelerometer --input-type f32[128, 3] \
--output-type f32[128, 3]
MODEL model ./model.tflite --input audio \
--input fft \
--input normalize --input-type f32[128, 3] \
--output-type f32[4]
PROC_BLOCK label hotg-ai/rune#proc_blocks/ohv_label \
--labels=Wing,Ring,Slope,Unknown \
--input model \
--output-type UTF8
OUT output SERIAL --input label
OUT debug SERIAL --input fft --input model --input label
That would be a relatively minor change to the language syntax and probably take a couple hours of fiddling in the parser. Unfortunately it makes a Runefile context sensitive (a --input-type argument can only follow an --input and some arguments are only specific to certain commands) which isn't nice from a language design standpoint.
Alternatively, instead of writing our own DSL we could leverage an existing one (e.g. Lua with our own custom functions):
audio = capability {
kind = "SOUND",
output_type = i16(16000),
args = { hz = 16000 },
}
accelerometer = capability { kind = "ACCEL", output_type = f32(128, 3) }
fft = proc_block {
    source = "hotg-ai/rune#proc_blocks/fft",
    input = audio,
    output_type = i8(1960),
}
normalize = proc_block {
source = "hotg-ai/rune#proc_blocks/normalize",
input = accelerometer,
output_type = f32(128, 3),
}
model = model {
filename = "./model.tflite",
input = { audio, accelerometer },
output_type = f32(4),
}
label = proc_block {
source = "hotg-ai/rune#proc_blocks/ohv_label",
input = model,
output_type = utf8,
args = {
labels = { "Wing", "Ring", "Slope", "Unknown" },
},
}
output = out { kind = "SERIAL", input = { label } }
debug = out { kind = "SERIAL", input = { fft, model, label } }
I'd maybe consider the above for a Runefile v3 seeing as it's quite a large change.
This is a tracking issue for letting users construct more expressive pipelines in their Runefiles, specifically:
Some optional additions to this would be:
- A rune graph command which takes a Runefile and generates a visual diagram of the pipeline, complete with input/output types
- Easier testing, and potentially using it directly in Dart
The microspeech Runefile expects to receive 24,000 samples of audio at 16kHz, but yes_01d22d03_nohash_0.wav
and friends are only 16,000 samples long.
This isn't a problem when you compile the Rune in release mode (rune build Runefile
), but when compiled in debug mode (rune build Runefile --debug
) we trigger this assertion.
[2021-03-25T18:30:19.671Z INFO ] panicked at 'assertion failed: `(left == right)`
left: `32000`,
right: `48000`', /home/michael/Documents/hotg-ai/rune/runic-types/src/wasm32/mod.rs:92:9
Error: Call failed
Caused by:
0: Unable to call the _call function
1: RuntimeError: unreachable
at core::core_arch::wasm32::unreachable::hb8a7ba5af00cd3dd (<module>[1099]:0x3d931)
at rust_begin_unwind (<module>[1051]:0x3c101)
at core::panicking::panic_fmt::hfa15f5472ef5e557 (<module>[1428]:0x4aed4)
at core::panicking::assert_failed_inner::h1ff1547b4e20ab23 (<module>[1442]:0x4baf6)
at core::panicking::assert_failed::h4a336faee37010c7 (<module>[968]:0x3786c)
at runic_types::wasm32::copy_capability_data_to_buffer::h3df68d48e80f298c (<module>[225]:0x97da)
at <runic_types::wasm32::sound::Sound<_> as runic_types::pipelines::Source>::generate::h4d22008f54108f0a (<module>[125]:0x578a)
at microspeech::_manifest::{{closure}}::h31e7eddd059d1b5c (<module>[107]:0x4300)
at <alloc::boxed::Box<F,A> as core::ops::function::FnMut<Args>>::call_mut::he6db486207a4a9a2 (<module>[55]:0x26c0)
at _call (<module>[127]:0x5e9a)
2: unreachable
However, if you update the capability and proc blocks in the Runefile to work with a I16[16000]
(so our debug assertion is happy), the model now thinks our "yes" example is "silence" and tests start failing. I'm guessing all those trailing zeroes make a difference once they go through the FFT.
@meelislootus do you have any idea what's going on here?
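For reference, the numbers in the assertion are byte counts: 16,000 i16 samples occupy 32,000 bytes, while the expected 24,000 samples occupy 48,000 bytes. A minimal sketch of the zero-padding workaround (illustrative code, not rune's actual capability implementation):

```rust
/// Zero-pad an audio clip so it contains exactly `expected` samples
/// (illustrative; rune's real capability code differs).
fn pad_to(samples: &mut Vec<i16>, expected: usize) {
    assert!(samples.len() <= expected, "clip is longer than expected");
    samples.resize(expected, 0);
}

fn main() {
    let mut clip = vec![100_i16; 16_000]; // the .wav files are 16,000 samples
    pad_to(&mut clip, 24_000);            // what the Runefile expects

    assert_eq!(clip.len(), 24_000);
    assert_eq!(clip.len() * 2, 48_000);   // the `right` byte count in the assertion

    // The trailing 8,000 samples are pure silence, which may be what
    // pushes the model towards the "silence" label after the FFT.
    assert!(clip[16_000..].iter().all(|&s| s == 0));
}
```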
Make microspeech work on the example Colab.
Closer; need a few more days + blog posts.
When runic_pb_fft::Processor
implements the runic_types::proc_block::ProcBlock
trait it adds runic_types
as a git dependency.
Like any normal trait, when we try to use the Processor
's process()
method inside runefile-parser
tests we get the following compile error:
error[E0599]: no method named `process` found for struct `Processor` in the current scope
--> runefile-parser/src/lib.rs:38:20
|
38 | return fft.process(waveform, map);
| ^^^^^^^
|
= help: items from traits can only be used if the trait is in scope
= note: the following trait is implemented but not in scope; perhaps add a `use` for it:
`use runic_types::proc_block::ProcBlock;`
Makes sense, we need to pull in the runic_types::proc_block::ProcBlock
trait before we can use it... But that doesn't work.
It looks like runic-pb-fft
has added runic-types
(from the hotg-ai/rune
repository) as a git dependency and the runic_types::proc_block::ProcBlock
that we use inside runefile-parser
comes from ../runic-types
. Cargo sees these two as completely different crates, meaning we are importing the wrong ProcBlock
.
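Concretely, the two dependency declarations Cargo treats as unrelated crates look something like this (paths and URLs are illustrative):

```toml
# runefile-parser/Cargo.toml - uses the local copy
[dependencies]
runic-types = { path = "../runic-types" }

# runic-pb-fft/Cargo.toml - uses the git copy; same crate name but a
# different source, so Cargo compiles it as a completely separate crate
# with its own, incompatible `ProcBlock` trait
[dependencies]
runic-types = { git = "https://github.com/hotg-ai/rune" }
```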
The two options I see are:
- Move runic-pb-fft inside the rune repository so we aren't using git dependencies, or
- Split runic-types into its own repository and make both runefile-parser and runic-pb-fft depend on it using git dependencies, probably also pinning the commit (e.g. runic-types = { git = "...", rev = "abcd1234" })

Considering we need rustc
installed on the user's PC so we can compile things anyway (cargo
just shells out to it), what benefit do we gain by pulling in cargo
as a library instead of invoking it via std::process::Command
?
Dropping the cargo
dependency will probably halve the crate's build time and remove a bunch of complexity.
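Invoking cargo via std::process::Command is only a few lines (a sketch; the demo in main() shells out via sh rather than assuming a Cargo project is present):

```rust
use std::process::Command;

/// Run `cargo` with the given arguments, the same way the
/// rune_codegen logs show it being executed today.
fn run_cargo(args: &[&str]) -> std::io::Result<bool> {
    let status = Command::new("cargo").args(args).status()?;
    Ok(status.success())
}

fn main() -> std::io::Result<()> {
    // In rune this would be something like:
    //   run_cargo(&["build", "--target=wasm32-unknown-unknown", "--quiet", "--release"])?;
    // Stand-in invocation so the example runs without a Cargo project:
    let ok = Command::new("sh").args(["-c", "exit 0"]).status()?.success();
    assert!(ok);
    let _ = run_cargo; // unused in this sketch
    Ok(())
}
```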
Wasmer provides a way to cache (Module::cache()
) WebAssembly modules after they are compiled, letting you skip that slow (and memory intensive) compilation process on startup.
This is something we may want to investigate after we can build+run the sine
, gesture
, and microspeech
runes.
See also: https://medium.com/wasmer/running-webassembly-100x-faster-%EF%B8%8F-a8237e9a372d
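Independent of wasmer's own Module::cache(), the general shape of such a cache is simple: key the compiled artifact by a hash of the .wasm bytes and only recompile on a miss. A toy sketch using only std (the compile function is a stand-in, not wasmer's API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Toy stand-in for an expensive wasm -> native compilation step.
fn compile(wasm: &[u8]) -> Vec<u8> {
    wasm.iter().rev().copied().collect()
}

struct ModuleCache {
    compiled: HashMap<u64, Vec<u8>>,
}

impl ModuleCache {
    fn new() -> Self {
        ModuleCache { compiled: HashMap::new() }
    }

    /// Return the compiled module, compiling only on a cache miss.
    fn get_or_compile(&mut self, wasm: &[u8]) -> &[u8] {
        let mut hasher = DefaultHasher::new();
        wasm.hash(&mut hasher);
        let key = hasher.finish();
        self.compiled.entry(key).or_insert_with(|| compile(wasm))
    }
}

fn main() {
    let mut cache = ModuleCache::new();
    let rune = b"\0asm example bytes";
    let first = cache.get_or_compile(rune).to_vec();
    let second = cache.get_or_compile(rune).to_vec();
    assert_eq!(first, second); // the second call hits the cache
}
```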
In today's sync with @kthakore, @Mi1ind, and @meelislootus, Meelis said that in the ML community you would handle streamed data by constructing/preprocessing your inputs to only ever be the size you care about.
Say you were implementing fall detection in real time on the phone. I'm guessing you'd set the app up so that the Rune will be invoked periodically (e.g. every second). Then, as part of initializing the accelerometer capability, you'll ask to receive the last 500 samples at a sample rate of (for example) 100 Hz.
It may look something like this in a Runefile:
CAPABILITY<f32[500]> accelerometer ACCEL --sample-rate 100
(note: the number of samples is determined by the capability output dimensions)
On the PC side (rune run
) you would provide the CLI with a list of accelerometer samples and the runtime could be told to run for X seconds. When we ask a capability for data we would tell it how many seconds have passed since we started and using the sample rate the capability could figure out which 500 samples to send back to the Rune.
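The "which 500 samples to send back" arithmetic from the paragraph above can be sketched like this (names are illustrative):

```rust
/// Given the whole recording, return the window of `n` samples ending
/// at `elapsed_secs` into the run, sampled at `sample_rate` Hz.
fn latest_window(all: &[f32], elapsed_secs: f32, sample_rate: u32, n: usize) -> &[f32] {
    let end = ((elapsed_secs * sample_rate as f32) as usize).min(all.len());
    let start = end.saturating_sub(n);
    &all[start..end]
}

fn main() {
    // 10 seconds of accelerometer data at 100 Hz
    let samples: Vec<f32> = (0..1_000).map(|i| i as f32).collect();

    // 7 seconds in, the Rune should see samples 200..700
    let window = latest_window(&samples, 7.0, 100, 500);
    assert_eq!(window.len(), 500);
    assert_eq!(window[0], 200.0);
    assert_eq!(*window.last().unwrap(), 699.0);
}
```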
Need to make these configurable
Make the runefile and any post proc blocks.
As part of #100 I tried enabling the Windows build but it failed when tflite
tries to use bindgen
to generate headers for TensorFlow Lite.
See the logs from our "Build release artifacts for windows-latest" job for more: logs_341.zip
We accidentally broke master
when #51 was merged because the fft
proc block tries to pull in sonograph
(a private hotg-ai repo) as a git dependency.
/usr/share/rust/.cargo/bin/cargo check --workspace --verbose
Updating crates.io index
Updating git repository `ssh://[email protected]/hotg-ai/sonogram`
error: failed to get `sonogram` as a dependency of package `FFT v0.1.0 (/home/runner/work/rune/rune/proc_blocks/fft)`
Error: failed to get `sonogram` as a dependency of package `FFT v0.1.0 (/home/runner/work/rune/rune/proc_blocks/fft)`
Caused by:
failed to load source for dependency `sonogram`
Caused by:
Unable to update ssh://[email protected]/hotg-ai/sonogram#39d4c460
Caused by:
failed to clone into: /home/runner/.cargo/git/db/sonogram-b77ca210ffbc4b64
Caused by:
failed to authenticate when downloading repository
* attempted ssh-agent authentication, but no usernames succeeded: `git`
if the git CLI succeeds then `net.git-fetch-with-cli` may help here
https://doc.rust-lang.org/cargo/reference/config.html#netgit-fetch-with-cli
Caused by:
error authenticating: no auth sock variable; class=Ssh (23)
To let CI use other private repos we need to:
- Add a webfactory/ssh-agent action to benchmarks.yml and main.yml which uses that SSH key

After that, SSH authentication should work and CI will pass again.
If it doesn't, we might need to tell cargo
to shell out to git
instead of using the compiled-in libssh2
by setting git-fetch-with-cli = true
under the [net]
table in .cargo/config
.
We may also want to set up GitHub's protected branches so nobody can push directly to master
and CI needs to pass before a PR can be merged.
At the moment, if you pass an unsupported parameter to a capability, the capability will error out (which aborts the Rune).
Instead, we should probably accept the capability (as a noop) and emit a warning to let the user know it isn't supported.
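A sketch of that behaviour change (hypothetical names, not the real capability trait): unknown parameters log a warning and become a noop instead of returning an error.

```rust
/// Hypothetical capability that accepts `n` and warns on anything else.
struct RandCapability {
    n: i32,
}

impl RandCapability {
    fn set_parameter(&mut self, name: &str, value: i32) {
        match name {
            "n" => self.n = value,
            unknown => {
                // Previously this returned an Err(...), aborting the Rune;
                // now we just warn and carry on as a noop.
                eprintln!(
                    "WARN: capability RAND ignoring unsupported parameter {:?} = {}",
                    unknown, value
                );
            }
        }
    }
}

fn main() {
    let mut cap = RandCapability { n: 1 };
    cap.set_parameter("n", 384);
    cap.set_parameter("frobnicate", 7); // warns instead of aborting
    assert_eq!(cap.n, 384);
}
```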
Feedback:
Mini Release:
Onboarding Session
Some long-form tutorials that we can publish to tinyVerse:
We found an issue where running the microspeech
Rune in an infinite loop will keep consuming memory until we've exhausted WebAssembly's 32-bit address space.
Steps to reproduce:
$ git checkout ea982ab
$ cargo --version --verbose
cargo 1.52.0-nightly (90691f2bf 2021-03-16)
release: 1.52.0
commit-hash: 90691f2bfe9a50291a98983b1ed2feab51d5ca55
commit-date: 2021-03-16
$ rustc --version --verbose
rustc 1.52.0-nightly (36f1f04f1 2021-03-17)
binary: rustc
commit-hash: 36f1f04f18b89ba4a999bcfd6584663fd6fc1c5d
commit-date: 2021-03-17
host: x86_64-unknown-linux-gnu
release: 1.52.0-nightly
LLVM version: 12.0.0
$ cargo rune build examples/microspeech/Runefile
[2021-03-18T10:15:13.110Z DEBUG rune::build] Parsing "examples/microspeech/Runefile"
[2021-03-18T10:15:13.111Z DEBUG rune::build] Compiling microspeech in "/home/michael/.cache/runes/microspeech"
[2021-03-18T10:15:13.114Z DEBUG rune_codegen] Executing "cargo" "build" "--target=wasm32-unknown-unknown" "--quiet" "--release"
[2021-03-18T10:15:14.062Z DEBUG rune::build] Generated 58231 bytes
$ cargo rune run examples/microspeech/microspeech.rune --capability sound:examples/microspeech/data/no_b66f4f93_nohash_8.wav --repeats 1000000000
Able to have the rune CLI execute WebAssembly.
Later on, the Runefile.
Make rune work in the browser with all 4 runes.