pieterpenninckx / rsynth
A crate for developing audio plugins and applications in Rust.
License: Other
Add middleware that implements `EventHandler<MidiEvent>` and `EventHandler<Timed<MidiEvent>>` by mapping the MIDI event, which is just raw bytes, onto the struct defined by the `wmidi` crate.
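A minimal sketch of what such middleware could look like. To keep the example self-contained, `RawMidiEvent`, `EventHandler`, and the `ParsedMidi` enum below are illustrative stand-ins for rsynth's event types and for `wmidi`'s `MidiMessage`; the real crates' APIs differ.

```rust
/// A raw MIDI event: just bytes, as delivered by the backend.
struct RawMidiEvent<'a> {
    bytes: &'a [u8],
}

/// Stand-in for a typed MIDI message (`wmidi::MidiMessage` in practice).
#[derive(Debug, PartialEq)]
enum ParsedMidi {
    NoteOn { channel: u8, note: u8, velocity: u8 },
    NoteOff { channel: u8, note: u8, velocity: u8 },
    Unknown,
}

trait EventHandler<E> {
    fn handle_event(&mut self, event: E);
}

/// Middleware that wraps an inner handler expecting typed messages.
struct MidiParser<H> {
    inner: H,
}

impl<'a, H: EventHandler<ParsedMidi>> EventHandler<RawMidiEvent<'a>> for MidiParser<H> {
    fn handle_event(&mut self, event: RawMidiEvent<'a>) {
        // Decode the status byte and forward the typed message.
        let parsed = match event.bytes {
            &[status, note, velocity] if status & 0xF0 == 0x90 => {
                ParsedMidi::NoteOn { channel: status & 0x0F, note, velocity }
            }
            &[status, note, velocity] if status & 0xF0 == 0x80 => {
                ParsedMidi::NoteOff { channel: status & 0x0F, note, velocity }
            }
            _ => ParsedMidi::Unknown,
        };
        self.inner.handle_event(parsed);
    }
}
```

The point of the middleware layer is that handlers further down the chain only ever see typed messages, never raw bytes.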
Currently, the `render_buffer` method is generic over the floating-point type. This is probably problematic for SIMD, so it will likely change once we have a clearer understanding of what portable SIMD will look like in stable Rust.
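For illustration, a minimal sketch of a render method that is generic over the sample type. In practice one would bound the type with a floating-point trait such as `num_traits::Float`; the tiny local trait here only keeps the example self-contained and is not rsynth's actual API.

```rust
// Hypothetical stand-in for a floating-point sample trait.
trait FloatSample: Copy {
    fn from_f32(x: f32) -> Self;
}
impl FloatSample for f32 {
    fn from_f32(x: f32) -> Self { x }
}
impl FloatSample for f64 {
    fn from_f32(x: f32) -> Self { x as f64 }
}

/// Fill the buffer with silence, for any supported sample type.
fn render_buffer<S: FloatSample>(output: &mut [S]) {
    for sample in output.iter_mut() {
        *sample = S::from_f32(0.0);
    }
}
```

The tension with SIMD is that a signature like this fixes the scalar element type, whereas a SIMD-friendly API may want to expose vectors of samples instead.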
Version 0.2.0 can make some breaking changes, so this is a list of things I would like to change before version 0.2.0:

- `AudioWriter::number_of_channels`
- Use `ContextualEventHandler` instead of `EventHandler`.
- Make `AudioChunk` more clean.
- Change the `CommonMidiPortMeta` trait so that no memory allocation is needed (i.e. do not require to return a `String`).
- `name` from `CommonPluginMeta` should not just return a slice.
- Writing middleware that adds something to the "context" requires a lot of boilerplate (see `src/middleware/frame_counter.rs` for instance).
Add support for an LV2 back-end via the `rust-lv2` crate.
Currently, only one MIDI input port is supported. JACK, for example, supports more than one MIDI port, so this can be improved.
`vst-rs` has received a new version. Time for an upgrade!
Note: version 0.2.0 of `vst-rs` fixed some soundness issues that will break `rsynth`'s code. You will need to write some `unsafe` code for this upgrade. Be warned!
Note: this problem has already been solved for the `jack` backend; you can have a look at `src/backend/jack_backend.rs` for inspiration.
JACK also supports digital CV (control voltage); it would be neat to see support for this in rsynth.
It would be nice to have middleware that splits the input buffers and output buffers on timed events, so that the plugins down the chain can always assume that an event happens at the beginning of a buffer. This makes it easier to write plugins with sample-accurate timings.
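The core of such middleware can be sketched as a function that splits a buffer at event offsets, so that each chunk starts exactly at an event. This is illustrative only; rsynth's buffer types and event representation would differ.

```rust
/// Split `buffer` at the given sample offsets (sorted ascending), so that
/// every chunk after the first begins exactly at an event position.
/// Handlers downstream can then assume events happen at chunk start.
fn split_at_events<'a>(buffer: &'a [f32], event_offsets: &[usize]) -> Vec<&'a [f32]> {
    let mut chunks = Vec::new();
    let mut start = 0;
    for &offset in event_offsets {
        // Skip offsets at the current start (no empty chunks) and out-of-range ones.
        if offset > start && offset <= buffer.len() {
            chunks.push(&buffer[start..offset]);
            start = offset;
        }
    }
    if start < buffer.len() {
        chunks.push(&buffer[start..]);
    }
    chunks
}
```

A real implementation would split input and output buffers consistently and re-time the events relative to each chunk, but the slicing logic is the essential part.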
Using vectors for tracking which keys are pressed is a bad idea and makes the library inefficient.
In order to clarify the scope of this crate, polyphony should be split off in a separate crate. Envelopes are quite embryonic and can simply be deprecated.
What is obvious to one person is not necessarily obvious to somebody else, so it would be good if pull requests, including mine, were reviewed. By "reviewing" I mean going through the diffs to see if there is something suspicious.
If you would like to review pull requests, you can reply to this issue and I'll make sure to give you some time when there's a pull request that I think needs review.
There's currently only support for `hound` for reading and writing audio files. Add support for using `audrey` as well.
The current voice-stealing algorithm is very bare-bones and can definitely be improved.
This issue requires some knowledge of voice-stealing algorithms. But frankly, there's enough low-hanging fruit to gain some improvements even without deep knowledge of voice stealing.
The crate currently uses enum variants to distinguish the various event types. This is hard to extend (e.g. in custom middleware).
Hi! I'm just wrapping my head around writing my first plug-in. I wonder if it would be possible to provide an example that wasn't just noise - perhaps a sine wave synthesizer?
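A sine wave generator is indeed a small amount of code. The sketch below is not tied to rsynth's plugin traits; it only shows the oscillator core that such an example would wrap (a phase accumulator filling a mono buffer).

```rust
use std::f32::consts::TAU;

/// A minimal sine oscillator: accumulates phase and writes one sample
/// per output slot.
struct SineOsc {
    phase: f32,
    phase_incr: f32,
}

impl SineOsc {
    fn new(freq_hz: f32, sample_rate: f32) -> Self {
        SineOsc {
            phase: 0.0,
            phase_incr: TAU * freq_hz / sample_rate,
        }
    }

    fn render(&mut self, output: &mut [f32]) {
        for sample in output.iter_mut() {
            *sample = self.phase.sin();
            // Wrap the phase to keep it numerically well-behaved.
            self.phase = (self.phase + self.phase_incr) % TAU;
        }
    }
}
```

Hooking this into a plugin would mean calling `render` from the audio callback and retuning `phase_incr` on note-on events.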
The `random` function in the example `test_synth.rs` causes significant overhead. A better approach would be to generate a sample buffer once beforehand and play it back repeatedly into the audio buffer.
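A sketch of the suggested fix: generate a noise table once, outside the audio callback, and replay it each buffer. The xorshift PRNG below is only there so the example needs no external crate; any random source would do for the one-time fill.

```rust
/// A precomputed table of noise samples in [-1, 1], replayed cyclically.
struct NoiseTable {
    table: Vec<f32>,
    pos: usize,
}

impl NoiseTable {
    /// `seed` must be nonzero for xorshift to produce a useful sequence.
    fn new(len: usize, mut seed: u32) -> Self {
        let table = (0..len)
            .map(|_| {
                // xorshift32: cheap deterministic PRNG, run once up front.
                seed ^= seed << 13;
                seed ^= seed >> 17;
                seed ^= seed << 5;
                (seed as f32 / u32::MAX as f32) * 2.0 - 1.0
            })
            .collect();
        NoiseTable { table, pos: 0 }
    }

    /// Copy the next samples into `output`, wrapping around the table.
    /// No random-number generation happens in the audio callback.
    fn render(&mut self, output: &mut [f32]) {
        for sample in output.iter_mut() {
            *sample = self.table[self.pos];
            self.pos = (self.pos + 1) % self.table.len();
        }
    }
}
```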
There are some cases where a plugin may want to borrow (in the sense of Rust's borrowing system) some "context" from middleware higher up the chain. It would be good to have this.
Pointer to the source code: `src/backend/jack_backend.rs`; look for `TODO: SysEx event`.
An example is the following piece of code:

```rust
#[cfg(feature = "stable")]
impl<Event, E, C, T, GenericEvent, Context> EventHandler<GenericEvent, Context>
    for AfterTouchMiddleware<Event, E, C, T>
where
    GenericEvent: Specialize<Timed<Event>>,
    Event: AfterTouchEvent,
    for<'a> E: Envelope<'a, T, EventType = Timed<u8>>,
    for<'ac, 'cc> C: EventHandler<GenericEvent, EnvelopeContextWrapper<'ac, 'cc, Context, E>>
        + EventHandler<Timed<Event>, EnvelopeContextWrapper<'ac, 'cc, Context, E>>,
{
    // ...
}
```
See also this thread for a more in-depth discussion.
- Add a trait `TryStop` to `backend/mod.rs` with a method `stop`. This trait is mainly for making it easier to use the same code for different backends. Calling the `stop` method stops the host, if supported.
- Add a trait `Stop` that "inherits" from `TryStop`. This trait indicates that stopping the host is supported.
- Implement the `Stop` trait for `JackHost` by adding a field `control` of type `jack::Control` to the `JackHost` struct, initializing it to `Continue` and setting it to `Quit` in the `stop` method.
- Change the `process` method for `JackHost` by returning `self.control` instead of always `Control::Continue`.
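The steps above can be sketched as follows. `Control` here is a local stand-in for `jack::Control`, and `JackHost` is heavily simplified; only the trait shapes and the control-flow idea come from the issue.

```rust
/// Stand-in for `jack::Control`, which tells JACK whether to keep
/// calling the process callback.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Control {
    Continue,
    Quit,
}

/// Stopping may or may not be supported by the backend.
trait TryStop {
    fn stop(&mut self);
}

/// Marker trait: stopping the host is definitely supported.
trait Stop: TryStop {}

/// Simplified host: the real struct holds JACK ports, buffers, etc.
struct JackHost {
    control: Control,
}

impl TryStop for JackHost {
    fn stop(&mut self) {
        self.control = Control::Quit;
    }
}

impl Stop for JackHost {}

impl JackHost {
    /// The process callback returns `self.control` instead of
    /// unconditionally returning `Control::Continue`.
    fn process(&self) -> Control {
        self.control
    }
}
```

Splitting `TryStop` from `Stop` lets generic code require a hard guarantee (`Stop`) where needed, while still compiling against backends that can only attempt to stop.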
Add support for using CPAL as a backend.
With the new crate system, documentation isn't working:

```
error: manifest path `/home/travis/build/resamplr/rsynth/Cargo.toml` contains no package: The manifest is virtual, and the workspace has no members.
```
Okay, this is a broad issue. `rsynth` currently has some basic support for envelopes, but I feel like it could be improved. This will probably involve some architectural decisions, so feel free to ask me questions. Also, I don't expect this to be solved in one pull request, so feel free to open a pull request for a small improvement, even if it would not close this issue.
Working on this issue requires some knowledge about real-time audio processing.
The examples in the folder `src/examples` still implement the meta-data traits "by hand". This can be improved by implementing the `Meta` trait instead. Showcase how easy it is now to define the meta-data!
Note: the VST-specific meta-data probably still needs to be implemented "by hand" right now.
It would be nice to be able to create an application that takes a MIDI input and generates a WAV output just by choosing another back-end. It would also be nice to be able to render "in-memory", to be used in integration tests.
We can also provide some test functions that test for common pitfalls:

- A voice uses `sample = value` instead of `sample += value`, thus overwriting what other voices are doing.
- A voice correctly uses `sample += value`, but the output is not initialized to 0.
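A test helper for the first pitfall could look like the sketch below. `render_voice` is a hypothetical, well-behaved voice used to exercise the check; the helper itself only compares how a voice treats a silent buffer versus a prefilled one.

```rust
/// Hypothetical voice that accumulates correctly into the buffer.
fn render_voice(output: &mut [f32]) {
    for sample in output.iter_mut() {
        *sample += 0.25; // accumulate with `+=`, never overwrite with `=`
    }
}

/// Returns true if the voice preserves what was already in the buffer,
/// i.e. it adds its signal instead of overwriting other voices.
fn voice_is_additive(render: impl Fn(&mut [f32])) -> bool {
    let mut silent = vec![0.0f32; 8];
    render(&mut silent);
    let mut prefilled = vec![1.0f32; 8];
    render(&mut prefilled);
    // If additive, each prefilled sample equals 1.0 plus what the voice
    // wrote into silence at the same position.
    silent
        .iter()
        .zip(prefilled.iter())
        .all(|(s, p)| (p - s - 1.0).abs() < 1e-6)
}
```

The second pitfall (forgetting to zero the output) could be tested analogously by handing the voice chain a buffer full of garbage and checking that the garbage does not leak into the result.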
The README.md states that `rsynth` is licensed "under the MIT/BSD-3 License", whereas the `Cargo.toml` file states "BSD-3-Clause".
@piedoom, which was your intention? Also, is it OK if I re-license it under MIT/Apache V2? Edit: I would also like to add the following to the README: "For the application of the MIT license, the examples included in the doc comments are not considered 'substantial portions of this Software'." Is that OK for you as well?