
dbraun / dawdreamer

819 stars · 30 watchers · 63 forks · 490.5 MB

Digital Audio Workstation with Python; VST instruments/effects, parameter automation, FAUST, JAX, Warp Markers, and JUCE processors

License: GNU General Public License v3.0

C 24.70% C++ 71.13% Objective-C++ 2.89% Java 0.47% Objective-C 0.09% CMake 0.01% Makefile 0.07% Dockerfile 0.01% Python 0.57% Faust 0.05% Shell 0.01%
Topics: juce, audio, python, daw, vst, audio-plugin, audio-processing, faust, vst3, vst3-host

dawdreamer's People

Contributors

auryd, dbraun, dependabot[bot], guillaumephd, malekinho8, sainttttt


dawdreamer's Issues

Linux Dockerfile

Anyone interested in spending a couple of hours on a call some afternoon, screen-sharing to figure out a Docker container to build/run this?
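As a starting point for that discussion, here is a hypothetical Dockerfile sketch. The package list and build steps are assumptions (JUCE on Linux typically needs ALSA and X11 headers); the actual compile recipe for DawDreamer would still need to be worked out.

```dockerfile
# Hypothetical sketch only -- package list and build steps are assumptions.
FROM python:3.9-slim

# Build tools plus the audio/windowing headers JUCE usually wants on Linux.
RUN apt-get update && apt-get install -y \
    build-essential git cmake \
    libasound2-dev libfreetype6-dev libx11-dev \
    libxrandr-dev libxinerama-dev libxcursor-dev

WORKDIR /opt
RUN git clone --recursive https://github.com/DBraun/DawDreamer.git
WORKDIR /opt/DawDreamer

# The actual DawDreamer compile/install commands would go here.
```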

macOS: missing symbol error on first attempt to run

I followed the README and compiled successfully (using Homebrew Python instead), renamed the resulting dylib, applied install_name_tool, renamed and moved the libfaust library, and put them together for a first run, but I got the following error:

Python 3.9.6 (default, Jun 29 2021, 06:20:32)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.25.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import dawdreamer as daw
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-1bf2d360b931> in <module>
----> 1 import dawdreamer as daw

ImportError: dlopen($HOME/Downloads/dawdreamer.so, 2): Symbol not found: _kMIDIPropertyProtocolID
  Referenced from: $HOME/Downloads/dawdreamer.so (which was built for Mac OS X 11.1)
  Expected in: /System/Library/Frameworks/CoreMIDI.framework/Versions/A/CoreMIDI
 in $HOME/Downloads/dawdreamer.so

So it's not finding _kMIDIPropertyProtocolID, which means that CoreMIDI should have been included as a dependency. But I checked the Xcode project and it already was (see attached screenshot). Any suggestions on how to fix this?

(screenshot: frameworks)


Weird render times / MIDI rendering efficiency with Kontakt

Hello,

I'm running into a problem with render times. In my own code, I wanted to render a MIDI file of approximately 4 minutes, and it just gets stuck at the engine.render(end_time) step. So I tested the code appended below with different intended render lengths, and found that:

  • rendering 5 seconds takes about 2 seconds
  • rendering 10 seconds takes about 11 seconds
  • rendering 20 seconds takes about 44-47 seconds

So for a render of some 300 seconds, the render time would blow up, and it simply looks like the program is stuck.

I also tried a real MIDI file without much "blank area" in it, but got the same effect.

What could have gone wrong?

code:

import numpy as np
import os
import pretty_midi
import soundfile as sf
import time

import dawdreamer as daw

SAMPLE_RATE = 44100
BUFFER_SIZE = 512
SYNTH_PLUGIN = os.path.abspath('testVST/DSK The Grand.dll') # plugin path

print('making engine...')
engine = daw.RenderEngine(SAMPLE_RATE, BUFFER_SIZE)

print('making processor...')
synth = engine.make_plugin_processor("synth", SYNTH_PLUGIN)

graph = [
    (synth, []),
]

synth.add_midi_note(60, 127, 0., 1.)  # note, velocity, start time sec, duration sec
synth.add_midi_note(67, 127, 0.5, .25)
# midi_path = os.path.abspath('MIDI/Track043_singleTrack.mid')
# pm = pretty_midi.PrettyMIDI(midi_path)
# end_time = pm.get_end_time()
# print('end time: {}'.format(end_time))
# synth.load_midi(midi_path)
print('loading graph...')
assert(engine.load_graph(graph))
print('rendering...')
start = time.time()
engine.render(10.)    # test e.g. 5, 10, 20... seconds
end = time.time()
print('Rendering time {}'.format(end - start))
audio = engine.get_audio()
audio = np.array(audio, np.float32).transpose()
sf.write('test_synth_basic.wav', audio, SAMPLE_RATE)

print("All Done!")
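The reported timings grow much faster than linearly. As a quick sanity check (pure Python, using only the numbers listed above), one can estimate the scaling exponent with a least-squares fit on a log-log scale:

```python
import math

# (render length sec, wall-clock render time sec) from the report above
timings = [(5, 2.0), (10, 11.0), (20, 45.0)]

# Estimate k in time ~ length**k by fitting log(time) against log(length).
xs = [math.log(n) for n, _ in timings]
ys = [math.log(t) for _, t in timings]
n = len(timings)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
k = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)

print(round(k, 2))  # prints 2.25 for the numbers above
```

An exponent well above 1 (here roughly quadratic) is consistent with the render "blowing up" at 300 seconds rather than merely being slow.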

Memory leak from creating multiple engines/graphs/processors!

This line is almost certainly causing a memory leak

myNode.get()->incReferenceCount();

But without it, I can't call myMainProcessorGraph->clear() in subsequent calls to loadGraph

I'm running this script and watching the memory footprint grow in Task Manager

import librosa
import dawdreamer as daw
import os
SAMPLE_RATE = 44100
song_path = "song.wav"
vst_effect_path = "path/to/vst.dll"

engine = daw.RenderEngine(SAMPLE_RATE, 512)
def load_audio_file(file_path, duration=None):
    sig, rate = librosa.load(file_path, duration=duration, mono=False, sr=SAMPLE_RATE)
    assert(rate == SAMPLE_RATE)
    return sig

vocals = load_audio_file(song_path, duration=5.)

import random

while True:
    graph = [
        (engine.make_playback_processor("vocals", vocals), []),
        (engine.make_plugin_processor("reverb", vst_effect_path ), ["vocals"])
        ]
    assert(engine.load_graph(graph))
    # engine.clear()
    os.system("clear")
    # del graph
    print(random.random())

The solution will involve knowledge from https://pybind11.readthedocs.io/en/stable/advanced/functions.html and modifications in source.cpp

.def("make_oscillator_processor", &RenderEngineWrapper::makeOscillatorProcessor, py::return_value_policy::take_ownership)

I'm also open to the idea of using the with statement but only if it solves the problem.

Something like...

# create the processors
# proc1 = ...
# proc2 = ...

graph = [
    (proc1, []),
    (proc2, ["proc1"])
]

with engine.load_graph(graph):
    for i in range(10):

        # modify stuff based on i
        proc1.some_param = i
        proc1.load_midi("song"+str(i)+".mid")

        engine.render(3.)
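The with-statement idea above could be prototyped in pure Python with contextlib before touching any C++. In this sketch, Engine is a stand-in stub, not the real dawdreamer API; the point is only that a context manager can guarantee cleanup of graph resources on every iteration, even if rendering raises:

```python
import contextlib

class Engine:
    """Stand-in stub for a hypothetical engine with explicit cleanup."""
    def __init__(self):
        self.loaded = False

    def load_graph(self, graph):
        self.loaded = True

    def clear(self):
        self.loaded = False

@contextlib.contextmanager
def loaded_graph(engine, graph):
    # Guarantee clear() runs even if rendering raises,
    # so graph resources can't leak across iterations.
    engine.load_graph(graph)
    try:
        yield engine
    finally:
        engine.clear()

engine = Engine()
with loaded_graph(engine, graph=[]):
    assert engine.loaded  # graph is live inside the block
assert not engine.loaded  # resources released on exit
```

The same pattern would work in the memory-leak reproduction above: the `while True` loop body becomes a `with` block, and `clear()` runs once per iteration no matter what.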

iPlug2 backend

I got your attention, right? Well, as a dreamer of DawDreamer, maybe we could at least start a discussion considering iPlug2 as a backend here.

Having two independent audio engines would encourage independent thinking about how to abstract DAW activities, and promote vendor-agnostic isolation?

use xcconfig file to facilitate homebrew python build on macOS

May I suggest using an xcconfig file to allow for build variations on macOS (such as using Homebrew Python)? See an example of its use here in one of my projects.

Otherwise, I am forced to modify the Xcode project directly with the recipe below instead of just editing an xcconfig file.

To Compile DawDreamer Xcode using Homebrew Python 3.9.x

  1. create the Xcode project for macOS from the .jucer file

  2. from the output of python3-config --cflags, add
    -I/usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9
    to HEADER_SEARCH_PATHS

  3. from the output of python3-config --ldflags, add
    -L/usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/config-3.9-darwin
    to LIBRARY_SEARCH_PATHS

  4. change -lpython to -lpython3.9 in OTHER_LDFLAGS (Other Linker Flags)
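The four steps above could collapse into a single xcconfig file along these lines. This is a hypothetical example (the file name and the Homebrew python@3.9 paths are assumptions for an Intel Mac, not verified against the project):

```xcconfig
// DawDreamer-local.xcconfig (hypothetical example)
PYTHON_PREFIX = /usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9

HEADER_SEARCH_PATHS = $(inherited) $(PYTHON_PREFIX)/include/python3.9
LIBRARY_SEARCH_PATHS = $(inherited) $(PYTHON_PREFIX)/lib/python3.9/config-3.9-darwin
OTHER_LDFLAGS = $(inherited) -lpython3.9
```

Switching Python versions would then mean editing one `PYTHON_PREFIX` line instead of three Xcode build settings.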

DawDreamer doesn't work with Addictive Drums 2

Hello

I rendered Addictive Drums 2 through DawDreamer.
Result -> the render works, but there is no sound.
Is this solvable?
We think the preset is not connecting with the samples that Addictive Drums 2 originally provides.

Thanks
KB

Linux build is broken

I'm working on it. The issue is that the Projucer is compiling the Faust and libresample sections even though I meant to exclude them with preprocessor macros. My suggestion to anyone who wants Linux is to try an earlier build, such as 6e086f5

Segmentation Fault when calling plugin.load_vst3_preset()

Hi!

I tried to load a VST3 preset from a file for a plugin using the load_vst3_preset() method, but calling the method crashes my program.
The file path should be correct, because if I enter an invalid path, a corresponding error message is printed instead of the SEGFAULT.
I tried it with Arturia Piano V2 (.piax file), Arturia Stage 73 (.stax file) and TAL NoiseMaker (.noisemakerpreset file).

Am I using the function wrong, or did I find a bug?
Here's my code if it helps: https://github.com/JonathanDotExe/midi-cube-tools/blob/main/SampleTool/main.py

Thanks in advance,
Jonathan

Reusable "subgraphs" like audio effect racks or instrument racks.

How can users create reusable audio processor racks (like audio effect racks in Ableton) without diving into C++ code? I want to avoid exposing a JUCE AudioProcessorGraph as a processor itself that can be nested within other AudioProcessorGraphs. So here's an idea that involves no changes to the current C++ code:

import numpy as np
from scipy.io import wavfile
import librosa

import dawdreamer

SAMPLE_RATE, BUFFER_SIZE = 44100, 512
VOCALS_PATH = "vocals.wav"

engine = dawdreamer.RenderEngine(SAMPLE_RATE, BUFFER_SIZE)

def load_audio_file(file_path, duration=None):
	sig, rate = librosa.load(file_path, duration=duration, mono=False, sr=SAMPLE_RATE)
	assert(rate == SAMPLE_RATE)
	return sig

def make_custom_subgraph(engine, name, wet, inputs):
	# A graph is a list of tuples.
	# This function will return a list of tuples (a subgraph),
	# that the user will unpack with * and combine with other graphs.

	# We will send the input signal to a 100% wet delay
	# and then put that signal through reverb.
	# Then we will mix the input signal with that reverb signal.
	# Note the use of the * operator in front of "inputs".
	# This operator unpacks a list into args.

	# todo: the naming of the subgraph processors is messy. Uniqueness could be
	#   enforced with a singleton class?

	sub_graph = [
		(engine.make_delay_processor(name+"_delay", "linear", 200., 1.0), inputs),
		(engine.make_reverb_processor(name+"_reverb", .5, .5, .33, .5, 1.), [name+"_delay"]),
		(engine.make_add_processor(name, [1.-wet, wet]), [*inputs, name+"_reverb"])
	]

	return sub_graph


vocals_data = load_audio_file(VOCALS_PATH, duration=5.)

vocals_processor = engine.make_playback_processor("my_vocals", vocals_data)

# Note the use of the * operator in front of make_custom_subgraph.
# make_custom_subgraph does NOT return a processor. It returns a "subgraph"
wet_par = .15
graph = [
	(vocals_processor, []),
	*make_custom_subgraph(engine, "rack", wet_par, ["my_vocals"]),
	(engine.make_compressor_processor("my_compressor"), ["rack"])
]

assert(engine.load_graph(graph))

engine.render(3.)
audio = engine.get_audio()
audio = np.array(audio, np.float32).transpose()
wavfile.write('composite_example.wav', SAMPLE_RATE, audio)

print('Done!')

Does this look good enough as a technique, or do you, the user, want the ability to make subprocessor graphs?

# This doesn't work because the dawdreamer C++ hasn't implemented these functions
def make_custom_subgraph_processor(engine, name, wet):

    # https://docs.juce.com/master/tutorial_audio_processor_graph.html
    # Look for "AudioGraphIOProcessor::audioInputNode" in that tutorial.
    # When we later call engine.make_subgraph_processor(name, sub_graph),
    #     it will have to intelligently connect audioInputNode to the right nodes.
    # input_token will represent audioInputNode in the meantime...
    input_token = ["SPECIAL_AUDIO_INPUT_TOKEN"]

    sub_graph = [
        (engine.make_delay_processor(name+"_delay", "linear", 200., 1.0), input_token),
        (engine.make_reverb_processor(name+"_reverb", .5, .5, .33, .5, 1.), [name+"_delay"]),
        (engine.make_add_processor(name+"_mix", [1.-wet, wet]), [*input_token, name+"_reverb"])
    ]

    # this function will need to find "SPECIAL_AUDIO_INPUT_TOKEN" in order to properly build a graph
    subgraph_processor = engine.make_subgraph_processor(name, sub_graph)

    return subgraph_processor

wet_par = .15
graph = [
	(vocals_processor, []),
	(make_custom_subgraph_processor(engine, "rack", wet_par), ["my_vocals"]),
	(engine.make_compressor_processor("my_compressor"), ["rack"])
]
engine.load_graph(graph)

Maybe it'll be a feature in the future, but try out the pure-Python approach in the short term :)

Need a sampler processor instrument

We need a new class very similar to PluginProcessor / engine.make_plugin_processor. It would allow you to play a sample at various MIDI pitches and might offer controls over attack/decay/sustain/release/filtering, etc.

import librosa
def load_audio_file(file_path, duration=None):
    sig, rate = librosa.load(file_path, duration=duration, mono=False, sr=SAMPLE_RATE)
    assert(rate == SAMPLE_RATE)
    return sig

synth_sound = load_audio_file("cool_synth_hit.wav")
sampler_processor = engine.get_sampler_processor("my_sampler", synth_sound)
sampler_processor.attack = 2. # attack in milliseconds
# do things similar to PluginProcessor
sampler_processor.load_midi("song.mid")

# add sampler_processor to a graph and render...

update: use this? https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h

Ability to save/store graph state

I would love a way to store a graph in a lightweight way for reproducibility. By lightweight, I mean perhaps an object that stores metadata on the graph topology, each plugin, each setting for each plugin, etc, such that it can persist in memory or on disk. So there would need to be accompanying save/load methods. Here's an example that I could imagine (using some of the demo code):

import dawdreamer as daw

SAMPLE_RATE = 44100
BUFFER_SIZE = 512
SYNTH_PLUGIN = "C:/path/to/synth.dll"  # .vst3 files work too.
SYNTH_PRESET = "C:/path/to/preset.fxp"
REVERB_PLUGIN = "C:/path/to/reverb.dll"  # .vst3 files work too.

engine = daw.RenderEngine(SAMPLE_RATE, BUFFER_SIZE)


# Make a processor and give it the name "my_synth", which we must remember later.
synth = engine.make_plugin_processor("my_synth", SYNTH_PLUGIN)
synth.load_preset(SYNTH_PRESET)
synth.set_parameter(5, 0.1234)  # override a specific parameter.
synth.load_midi("C:/path/to/song.mid")

graph = [
  (synth, []), 
  (engine.make_reverb_processor("reverb"), ["my_synth"]), 
]

engine.load_graph(graph)

state = engine.save_state()

Then state would look like:

>>> state
{
    'graph': {
        'my_synth0': {
            'type': 'plugin_processor',
            'path': 'C:/path/to/synth.dll',
            'preset': 'C:/path/to/preset.fxp',
            'source': True,
            'sink': False,
            'parameters': [{'num': 5, 'value': 0.1234}]
        },
        'reverb_processor1': {
            'type': 'reverb_processor',
            'path': 'C:/path/to/reverb.dll',
            'preset': None,
            'source': True,
            'sink': False,
            'parameters': []
        },
        'sink2': {
            'type': 'sink',
            'path': None,
            'preset': None,
            'source': False,
            'sink': True,
            'parameters': []
        }
    },
    'routing': {
        'my_synth0': 'reverb_processor1',
        'reverb_processor1': 'sink2'
    },
    'sample_rate': 44100,
    'buffer_size': 512

}

Obviously there will be things that I'm missing here, but the point is that once you have a state object/dict/whatever, you can just go:

engine = daw.load_engine_from_state(state)

and you get the same exact engine as before (assuming none of the paths have changed).
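Whatever the final schema looks like, keeping the state as plain dicts and lists makes persistence nearly free, since JSON round-trips that structure losslessly. A sketch with a simplified, hypothetical state dict (field names are illustrative only, not the real API):

```python
import json

# Simplified, hypothetical state; field names are illustrative only.
state = {
    "graph": {
        "my_synth0": {"type": "plugin_processor",
                      "path": "C:/path/to/synth.dll",
                      "parameters": [{"num": 5, "value": 0.1234}]},
        "reverb_processor1": {"type": "reverb_processor", "parameters": []},
    },
    "routing": {"my_synth0": "reverb_processor1"},
    "sample_rate": 44100,
    "buffer_size": 512,
}

# Serialize and reload; json.dump(state, open(path, "w")) would persist
# the same structure to disk for later reproducibility.
s = json.dumps(state, indent=2)
restored = json.loads(s)

assert restored == state  # lossless round-trip
```

A hypothetical load_engine_from_state would then just walk this dict, re-create each processor, and re-apply presets and parameters.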

Handling multi-track MIDI files

Many MIDI files can contain more than one instrument track. I think there needs to be a plan for handling these types of files. Looking at this function, it looks like all of the MIDI tracks get rendered with the same graph, which might be problematic if the user doesn't know or expect there to be more than one track. For my use case, it's hard to know beforehand how many tracks are in the MIDI file.

Currently, I've been using pretty_midi and implementing the logic myself in Python. The thing that sucks about using pretty_midi this way is that it's a huge pain to extract just one track from a multitrack file. Unless there's a better way, what I've been doing is:

  1. Load multitrack MIDI into a PrettyMIDI object
  2. Make a copy of the object with only the 1 track you want
  3. Save to disk
  4. Reload the MIDI with RenderMan (now DawDreamer)

Which is a very heavy and slow process. I would love a better solution to this.

One solution that would greatly improve my workflow would be a way to "blow up" the MIDI file once it's loaded. pretty_midi gives you a nice list of Instrument objects that can be manipulated individually; it might be nice to expose something similar in the API. So the solution I would like to see is one that gives me a list of MIDI track objects in memory, each of which I can send through a graph individually.
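The "blow up the MIDI file" idea can be sketched without any MIDI library at all: given flat note events tagged with a track index, group them per track and feed each group into its own render pass. The dawdreamer calls in the comments use illustrative names and are assumptions, not the real API:

```python
from collections import defaultdict

# (track, pitch, velocity, start_sec, duration_sec) -- toy events standing in
# for a parsed multi-track MIDI file.
events = [
    (0, 60, 100, 0.0, 1.0),
    (1, 36, 127, 0.0, 0.5),
    (0, 64, 100, 1.0, 1.0),
    (1, 38, 127, 0.5, 0.5),
]

# Group the flat event list into one note list per track.
tracks = defaultdict(list)
for track, pitch, vel, start, dur in events:
    tracks[track].append((pitch, vel, start, dur))

for track_idx, notes in sorted(tracks.items()):
    # One synth / one render per track, e.g. (hypothetical usage):
    # synth = engine.make_plugin_processor(f"synth{track_idx}", PLUGIN_PATH)
    # for pitch, vel, start, dur in notes:
    #     synth.add_midi_note(pitch, vel, start, dur)
    print(track_idx, len(notes))
```

This is essentially steps 1-4 above collapsed into memory, with no round-trip through the filesystem.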

Automatically determine render length

Hi and thank you very much for this great evolution from RenderMan!

Most DAWs (all?) compute the final length of the rendered output based on the audio/MIDI input and the "tail" of any subsequent effects or modifiers. It would be great to not have to provide the exact number of seconds to render, but simply call engine.render() and let DawDreamer estimate the correct length automatically.

EDIT: Alternatively, a method that returns the computed length of the current graph would suffice, I guess. Wouldn't feel as polished, though.
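A rough Python-side stopgap (not part of the dawdreamer API) is to derive the render length from the notes you queued, plus a guessed effect tail. This only covers the MIDI case and the tail length is an assumption, not something the graph reports:

```python
def estimate_render_length(notes, tail_seconds=2.0):
    """Estimate seconds to render: last note-off plus an assumed effect tail.

    notes: iterable of (pitch, velocity, start_sec, duration_sec), i.e. the
    same arguments passed to add_midi_note. tail_seconds covers reverb/delay
    tails and is a guess, not a computed value.
    """
    if not notes:
        return tail_seconds
    last_note_off = max(start + dur for _, _, start, dur in notes)
    return last_note_off + tail_seconds

# Same two notes as the Kontakt example earlier in this page:
notes = [(60, 127, 0.0, 1.0), (67, 127, 0.5, 0.25)]
print(estimate_render_length(notes))  # prints 3.0
```

A real engine.render() with no argument would presumably do this internally, and could ask each effect for its actual tail length instead of guessing.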

faust_processor.compile() results in segmentation fault

I've been trying to get this working in a miniconda environment on Linux.
Since I don't have sudo privileges, I had to build Faust from source and install it locally.
I edited the header and library search paths of DawDreamer.jucer to point at the miniconda Python and Faust, and the installation went alright.

So far, it seems to work for sampling VSTs like Dexed, but fails to do anything Faust-related.
Initializing a Faust processor is fine, but calling functions like get_parameters_description() or compile() results in a segmentation fault.
I'm guessing Faust isn't linked correctly or something; is there any way to check this?
Here are the header and library search paths:

/home/n_masuda/miniconda3/envs/daw/include/python3.9;
../../thirdparty/pybind11/include;
../../thirdparty/faust/architecture;
../../thirdparty/faust/compiler;
../../thirdparty/libsamplerate/src;
../../thirdparty/libsamplerate/include;
../../thirdparty/rubberband;
../../thirdparty/rubberband/rubberband;
../../thirdparty/rubberband/src/kissfft;
../../thirdparty/rubberband/src;
../../thirdparty/portable_endian/include;
/home/n_masuda/miniconda3/envs/daw/lib
/home/n_masuda/usr/local/lib (this is where Faust is)
/usr/local/lib
../../thirdparty/libsamplerate/build/src

Pitch bend with MIDI plugins

Hi,

Is it possible to add pitch bend to instruments without reading it from a MIDI file (maybe something like synth.add_midi_note(68, 32, 0, 2499, HERE_PITCHBEND))? Or something like this!

If not, could you add this? It would be great!

Thanks for your work!

Add parameter automation feature

How can processor parameters such as the cutoff frequency of a filter be modified over time?

First, this seems connected to the block size. You probably need a small block size, maybe 128 samples at a 44100 Hz sample rate, in order not to perceive steppiness in parameter changes.

Second, there's the broader issue of the coding technique to implement this. Maybe every processor class needs to be able to hold an "automation object", which could have any number of channels/samples. In processBlock, rather than using protected member variables, the processor would call a function that looks up values from this automation object but falls back on the stored values for missing channels.

Should the automation data come from MIDI files or be generated in Python?

https://forum.juce.com/t/how-does-sample-accurate-host-plugin-automation-work/2117/13
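The per-block lookup described above can be prototyped host-side: precompute one parameter value per audio block, then (hypothetically) apply each value before rendering that block. A pure-Python sketch of the curve generation, with the block size from the discussion above:

```python
SAMPLE_RATE = 44100
BLOCK_SIZE = 128  # small block size to keep parameter steps inaudible

def automation_ramp(start_value, end_value, duration_sec,
                    sample_rate=SAMPLE_RATE, block_size=BLOCK_SIZE):
    """One parameter value per audio block, linearly interpolated."""
    num_blocks = int(duration_sec * sample_rate / block_size)
    if num_blocks <= 1:
        return [end_value]
    step = (end_value - start_value) / (num_blocks - 1)
    return [start_value + i * step for i in range(num_blocks)]

# e.g. sweep a normalized filter-cutoff parameter from 0.2 to 0.8 over 1 second
curve = automation_ramp(0.2, 0.8, 1.0)
print(len(curve), round(curve[0], 3), round(curve[-1], 3))  # prints 344 0.2 0.8
```

An "animation object" in C++ would essentially store such a curve per parameter and index into it by block (or by sample, for sample-accurate automation as in the linked JUCE thread).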

Loading VST presets that aren't FXP format

Some kinds of VSTs don't use the FXP file format. What can be done to load their presets? Sometimes they're human-readable XML files, which can be parsed in Python. Other times they're binary formats, which would be more challenging.
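For the human-readable XML case, the standard library is enough. This sketch parses a made-up preset layout; real plugins each use their own schema, so the element and attribute names here are pure assumptions:

```python
import xml.etree.ElementTree as ET

# Made-up preset XML -- real plugin schemas vary and must be inspected per plugin.
preset_xml = """
<Preset name="Warm Pad">
  <Param id="5" value="0.1234"/>
  <Param id="12" value="0.75"/>
</Preset>
"""

root = ET.fromstring(preset_xml)
params = {int(p.get("id")): float(p.get("value")) for p in root.findall("Param")}

# These pairs could then be applied with synth.set_parameter(num, value).
print(root.get("name"), params)
```

The binary formats would need per-plugin reverse engineering, or support from the plugin's own SDK.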

Issue with pip install on macOS

First off, thanks for your work on this project! This looks really cool; I've used RenderMan a bunch, and this updated package will be very useful :)

I'm having some difficulties with the pip install on macOS 11.2.

Python info:

>>> import sys
>>> sys.version
'3.9.7 (default, Sep 16 2021, 08:50:36) \n[Clang 10.0.0 ]'

First, pip can't find the pypi package:

pip install dawdreamer
ERROR: Could not find a version that satisfies the requirement dawdreamer (from versions: none)
ERROR: No matching distribution found for dawdreamer

I believe this is because none of the wheels are supported on my machine. I tried downloading the wheels and installing with:

pip install dawdreamer-0.5.7.5-cp39-cp39-macosx_11_0_x86_64.whl
ERROR: dawdreamer-0.5.7.5-cp39-cp39-macosx_11_0_x86_64.whl is not a supported wheel on this platform.

So then I tried cloning the repo and doing a local pip install:

git clone https://github.com/DBraun/DawDreamer.git
pip install -e ./DawDreamer

Obtaining file:///Users/jshier/development/Libraries/DawDreamer
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  ERROR: Command errored out with exit status 1:
   command: /Users/jshier/opt/anaconda3/bin/python /Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/tmpexe4_ww7
       cwd: /Users/jshier/development/Libraries/DawDreamer
  Complete output (18 lines):
  Traceback (most recent call last):
    File "/Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 349, in <module>
      main()
    File "/Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 331, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 117, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/private/var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/pip-build-env-bqchioto/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 154, in get_requires_for_build_wheel
      return self._get_build_requires(
    File "/private/var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/pip-build-env-bqchioto/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 135, in _get_build_requires
      self.run_setup()
    File "/private/var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/pip-build-env-bqchioto/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 150, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 16, in <module>
      python_requires = "==" + os.environ['PYTHONMAJOR'] + '.*'  # set with github action
    File "/Users/jshier/opt/anaconda3/lib/python3.8/os.py", line 675, in __getitem__
      raise KeyError(key) from None
  KeyError: 'PYTHONMAJOR'
  ----------------------------------------
WARNING: Discarding file:///Users/jshier/development/Libraries/DawDreamer. Command errored out with exit status 1: /Users/jshier/opt/anaconda3/bin/python /Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/tmpexe4_ww7 Check the logs for full command output.
ERROR: Command errored out with exit status 1: /Users/jshier/opt/anaconda3/bin/python /Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/tmpexe4_ww7 Check the logs for full command output.

I saw that you set PYTHONMAJOR as an environment variable in the GitHub workflows:

export PYTHONMAJOR=3.9
pip install -e ./DawDreamer

Obtaining file:///Users/jshier/development/Libraries/DawDreamer
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  ERROR: Command errored out with exit status 1:
   command: /Users/jshier/opt/anaconda3/bin/python /Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/tmpsr7snsa7
       cwd: /Users/jshier/development/Libraries/DawDreamer
  Complete output (21 lines):
  python_requires: ==3.9.*
  Traceback (most recent call last):
    File "/Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 349, in <module>
      main()
    File "/Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 331, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 117, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/private/var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/pip-build-env-3icdffle/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 154, in get_requires_for_build_wheel
      return self._get_build_requires(
    File "/private/var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/pip-build-env-3icdffle/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 135, in _get_build_requires
      self.run_setup()
    File "/private/var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/pip-build-env-3icdffle/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 150, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 69, in <module>
      shutil.copy(os.path.join(build_folder, 'dawdreamer.so'), os.path.join('dawdreamer', 'dawdreamer.so'))
    File "/Users/jshier/opt/anaconda3/lib/python3.8/shutil.py", line 415, in copy
      copyfile(src, dst, follow_symlinks=follow_symlinks)
    File "/Users/jshier/opt/anaconda3/lib/python3.8/shutil.py", line 261, in copyfile
      with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
  FileNotFoundError: [Errno 2] No such file or directory: '/Users/jshier/development/Libraries/DawDreamer/Builds/MacOSX/build/Release/dawdreamer.so'
  ----------------------------------------
WARNING: Discarding file:///Users/jshier/development/Libraries/DawDreamer. Command errored out with exit status 1: /Users/jshier/opt/anaconda3/bin/python /Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/tmpsr7snsa7 Check the logs for full command output.
ERROR: Command errored out with exit status 1: /Users/jshier/opt/anaconda3/bin/python /Users/jshier/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /var/folders/d3/8t_hqbqd77n1n8y47rwyb2nm0000gn/T/tmpsr7snsa7 Check the logs for full command output.

Thanks for your help!

Linux wheel isn't on PyPI

Need to check this

# # I think the audit is failing because the build links against local LLVM-related things.
# # or https://cibuildwheel.readthedocs.io/en/stable/faq/#linux-builds-on-docker
# - name: Build wheels
#   run: |
#     python -m cibuildwheel --output-dir wheelhouse --platform linux
#   env:
#     PYTHONMAJOR: ${{ matrix.python-version }}
#     CIBW_PLATFORM: linux
#     CIBW_BUILD_VERBOSITY: 1
#     CIBW_REPAIR_WHEEL_COMMAND_LINUX: pip install auditwheel-symbols && (auditwheel repair -w {dest_dir} {wheel} || auditwheel-symbols --manylinux 2010 {wheel})
#     CIBW_TEST_REQUIRES: -r test-requirements.txt
#     CIBW_TEST_COMMAND: "cd {project}/tests && pytest ."
#     CIBW_ARCHS: auto64
#     CIBW_SKIP: "*pp* *p36-* *p37-* *p38-* *p310-*"
# - uses: actions/upload-artifact@v2
#   with:
#     name: my-wheel-artifact
#     path: ./wheelhouse/*.whl

Trouble with conda on macOS

Python exits abnormally when importing dawdreamer, with an error message similar to the following.

Fatal Python error: PyMUTEX_LOCK(gil->mutex) failed
Python runtime state: unknown

What's strange is that I've been able to successfully compile, import, and execute DawDreamer methods on Linux (Ubuntu 18.04, Python 3.6).

However, I've been running into this issue on macOS 11.2. I've tried compiling using the pybind11 commit referenced in the submodule, as well as the head of the pybind11 master branch. I've tried compiling with Python 3.6, 3.7, 3.8, and 3.9. In all cases, Python crashes when importing dawdreamer with a stack trace similar to the following.

Process:               python3.8 [15931]
Path:                  /Users/USER/*/python3.8
Identifier:            python3.8
Version:               ???
Code Type:             X86-64 (Native)
Parent Process:        zsh [81315]
Responsible:           iTerm2 [459]
User ID:               660792939

Date/Time:             2021-02-05 16:34:59.090 -0500
OS Version:            macOS 11.2 (20D64)
Report Version:        12
Bridge OS Version:     5.2 (18P4346)
Anonymous UUID:        9C234BA2-0174-CEDF-ADD9-33AD591B297B


Time Awake Since Boot: 94000 seconds

System Integrity Protection: enabled

Crashed Thread:        0  Dispatch queue: com.apple.main-thread

Exception Type:        EXC_CRASH (SIGABRT)
Exception Codes:       0x0000000000000000, 0x0000000000000000
Exception Note:        EXC_CORPSE_NOTIFY

Application Specific Information:
abort() called

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   libsystem_kernel.dylib        	0x00007fff20623462 __pthread_kill + 10
1   libsystem_pthread.dylib       	0x00007fff20651610 pthread_kill + 263
2   libsystem_c.dylib             	0x00007fff205a4720 abort + 120
3   libpython3.8.dylib            	0x0000000111e7afb8 fatal_error + 712
4   libpython3.8.dylib            	0x0000000111e7b253 Py_FatalError + 19
5   libpython3.8.dylib            	0x0000000111e189f2 take_gil + 610
6   libpython3.8.dylib            	0x0000000111e7f4a2 PyGILState_Ensure + 66
7   dawdreamer.so                 	0x000000010f708089 pybind11::detail::get_internals()::gil_scoped_acquire_local::gil_scoped_acquire_local() + 25 (internals.h:264)
8   dawdreamer.so                 	0x000000010f706e65 pybind11::detail::get_internals()::gil_scoped_acquire_local::gil_scoped_acquire_local() + 21 (internals.h:264)
9   dawdreamer.so                 	0x000000010f70681d pybind11::detail::get_internals() + 93 (internals.h:267)
10  dawdreamer.so                 	0x000000010f7c3612 PyInit_dawdreamer + 162 (source.cpp:4)
11  python                        	0x000000010ecb42f5 _PyImport_LoadDynamicModuleWithSpec + 565
...

It appears that pybind11 is (maybe) trying to reference an uninitialized Python GIL? Any suggestions on debugging this issue are welcome!

Continuous Integration tests should run dawdreamer code, not just compile it

1. Publicly share the test suite code, which currently exists only on my computer (see the tests directory).
2. Update .travis.yml or GitHub Actions to actually run this code.

The remaining issue is that the GitHub Action runs pytest before building the wheel, rather than after building the wheel and simulating a fresh install.

VST3 instruments aren't working

VST3 instruments appear to crash during rendering. The plugins load fine, but the processor crashes on myPlugin->processBlock inside PluginProcessor.cpp:

myPlugin->processBlock(buffer, myRenderMidiBuffer);
I've made sure that myPlugin isn't nullptr.

Some free VST3 instruments to try are Surge.vst3 and TAL-NoiseMaker-64.vst3.

Note that VST3 effects are working.

RFE: Pass raw soundfile data to Faust without compiling long string

Reference: soundfile

pseudocode:

soundfile_test.dsp:

process = 0,_~+(1):soundfile("firstSound",0):!,!,_,_;

In Python:

import dawdreamer
import numpy as np
engine = dawdreamer.RenderEngine(44100, 512)
faust_processor = engine.make_faust_processor("faust")
faust_processor.set_dsp_file("soundfile_test.dsp")

faust_processor.soundfiles = [
    ("firstSound", np.random.rand(2, 1024)),
    ("secondSound", np.random.rand(2, 42))
]

Need to subclass SoundUI as MySoundUI with a new empty constructor and modify addSoundFile to take new arguments: a const char* label (like "firstSound") and a float** soundData (the NumPy data). Inside this new version of addSoundFile, it should create a Soundfile* sound_file = new Soundfile(...) and then fill in its buffers with soundData.

Then pseudocode in FaustProcessor.cpp:

auto fSoundInterface = new MySoundUI("", 44100, nullptr, false);

for (soundLabel, soundData) in zip(soundLabels, datas):
    fSoundInterface->addSoundfile(soundLabel, soundData);

m_dsp->buildUserInterface(fSoundInterface);

Got all zero WAV file with HALion VST

Hello. I tested make_plugin_processor with several different plugins. For HALion 6, using a preset FXP file saved by FL Studio, the result is an all-zero WAV file. I tried different preset settings, and the output is always zero. For some other plugins, such as DSK Strings, DSK Dynamic Guitars, and Mini_DiZi.dll, the result is normal and I get an audible WAV file.
Has anyone faced this problem? Are there any parameters that need to be set for HALion 6? Many thanks!

Git tags for releases?

Hi. I'd like to include DawDreamer on https://libreav.org/updates, but there aren't any git release tags for the project. Might these be possible going forward? Any message attached to the git tag will appear in /releases.atom for this repo (or it can be done through the GH interface with Markdown).

RFE: Render and fetch multiple processors' audio at once

pseudocode:

some_processor.record = True
other_processor.record = True

add_processor = engine.make_add_processor("add", [1., 1.])

engine.load_graph([
    (some_processor, []),
    (other_processor, []),
    (add_processor, [some_processor.get_name(), other_processor.get_name()])
])

engine.render(10.)
a1 = engine.get_audio()  # same as before
a2 = engine.get_audio(some_processor.get_name())  # get specific intermediate processor's audio
a3 = engine.get_audio(other_processor.get_name())  # get specific intermediate processor's audio

# or
a2 = some_processor.get_audio()  # get specific intermediate processor's audio
a3 = other_processor.get_audio()  # get specific intermediate processor's audio
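For reference, the add processor in the graph above simply computes a gain-weighted sum of its inputs, so with gains [1., 1.] the fetched arrays would relate as follows. This is a conceptual sketch with stand-in arrays, not real DawDreamer output:

```python
import numpy as np

gains = [1.0, 1.0]
some_audio = np.random.rand(2, 100)   # stand-in for some_processor's output
other_audio = np.random.rand(2, 100)  # stand-in for other_processor's output

# What the "add" processor computes: a per-input-gain weighted sum.
mixed = gains[0] * some_audio + gains[1] * other_audio
```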

RFE: Programmatic routing of modular synthesizers

Can we build complex modular synthesizer routings with Python code? Controlling a synthesizer's filter cutoff with an ADSR?

Mockup code:

synth = engine.make_synth_processor()

adsr1 = synth.make_adsr()  # for controlling volume
adsr2 = synth.make_adsr()  # for controlling filter cutoff

wavetable = synth.make_wavetable()  # default sine wave based on MIDI key.

filter = synth.make_filter(mode='low')  # low-pass filter
filter.freq = 7000.  # baseline cutoff in Hz.

# connect wavetable to filter and then filter to synth output
wavetable.output.connect(filter.input)  # ugly?
filter.output.connect(synth.output)

routing1 = adsr1.output.connect(wavetable.level, mode='unipolar', amount=1.)  # control volume
routing2 = adsr2.output.connect(filter.freq, mode='unipolar', amount=2000.)  # control cutoff

# finally, make graph
graph = [
    (synth, [])  # no inputs necessary to this synth
]

engine.load_graph(graph)
engine.render(10.)

# change parameters or routing and render again
filter.mode = 'bandpass'  # switch to bandpass filter.
routing2.amount = 3000.

engine.render(10)

synth.remove_routing(routing2)  # remove the filter routing

engine.render(10)

Another cool audio project is signalflow, which has similarities to TensorFlow. You create nodes and use built-in operators on them such as +, -, /, *, %.

RFE: Output one block at a time in a for-loop

Currently the RenderEngine.render method takes a duration in seconds. It would be good if there were a way to render one block at a time, without starting from zero each time.

The user would be responsible for appending the new block of samples to whatever data object they like, but that seems reasonable. It would also be important to access any processor whose ".record" property is enabled.

Related: This could also help with this issue: #43
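The user-side loop might look like the sketch below. Note that render_block is hypothetical (no such method exists in DawDreamer yet), so a stub stands in for it here:

```python
import numpy as np

BLOCK_SIZE = 512
NUM_BLOCKS = 10

def render_block_stub(num_samples):
    # Stand-in for the proposed engine.render_block() API (hypothetical);
    # it would return one stereo block without resetting the transport.
    return np.zeros((2, num_samples), dtype=np.float32)

# The user appends each new block and joins them once at the end,
# which avoids re-rendering from time zero on every call.
blocks = [render_block_stub(BLOCK_SIZE) for _ in range(NUM_BLOCKS)]
audio = np.concatenate(blocks, axis=1)  # shape (2, 5120)
```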

Why my code is not working?

Hi,

I am starting with Python, and I am not able to understand how the graph works.


import dawdreamer as daw
import numpy as np
from scipy.io import wavfile
import librosa

SAMPLE_RATE = 44100
BUFFER_SIZE = 512
DURATION = 15

def load_audio_file(file_path, duration=None):
  sig, rate = librosa.load(file_path, duration=duration, mono=False, sr=SAMPLE_RATE)
  assert(rate == SAMPLE_RATE)
  return sig

engine = daw.RenderEngine(SAMPLE_RATE, BUFFER_SIZE)
meu_plugin = engine.make_plugin_processor("plugin", "C:/Users/neimog/OneDrive_usp.br/Documents/Plugins/VST3/dearVR MICRO.vst3")

meu_plugin.set_parameter(0, 45) # override a specific parameter.
print(meu_plugin.get_parameter_name(0))

multi =  load_audio_file("C:/USERS/NEIMOG/ONEDRIVE_USP.BR/DOCUMENTS/OpenMusic/OUT-FILES/om-ckn/Fl-mul-G#5-G4+-D6-100-cents.wav-v-stereo.wav")

graph = [
  (meu_plugin, [multi])]

engine.load_graph(graph)
engine.render(DURATION)
audio = engine.get_audio()  

wavfile.write('my_song2.wav', SAMPLE_RATE, audio.transpose()) # don't forget to transpose!

How do I make the audio multi be processed by meu_plugin?

Thank you very much!!!!!

Support for VST2 format on Windows?

Hello,

I want to know whether DawDreamer will support the VST2 plug-in format on the Windows platform. As far as I know, JUCE has already removed the VST2 SDK, and in your provided VS2019 solution the following two includes fail (juce_VSTPluginFormat.cpp):
#include <pluginterfaces/vst2.x/aeffect.h>
#include <pluginterfaces/vst2.x/aeffectx.h>

Actually, I once successfully built RenderMan for Windows and tested it. The built dynamic library librenderman has no problem loading a VST3 plug-in (which has the extension .vst3), but it could not load a VST2 plug-in (.dll). Note that my build of RenderMan is not from the original repo but from this repo.

I see that in your example code you have used a plug-in in .dll format, which I think should be VST2 if it is on Windows. So I am really interested to know whether you are sure that DawDreamer supports VST2 plug-ins on Windows. Or, if you could tell me why the include errors above happen when I try to build DawDreamer, it would be really helpful for my project; I have been searching for a command-line Python VST host/DAW for Windows for a long time!

Thanks; I am still a fairly fresh programmer.

Issue with pip install on Windows

I am in the Windows 10 console, and when I try to install it via pip it returns:

ERROR: Could not find a version that satisfies the requirement dawdreamer (from versions: none)
ERROR: No matching distribution found for dawdreamer

Opening and showing UI of VSTs

Would it be possible to open the UI of VSTs?
Main reason is to automate some tasks via image recognition and automated mouse clicking.

For example, looping over presets is not supported in most plugins and has to be done manually (find the button and click it).

PS: This is the project I have been looking for for two years. I played around with RenderMan but did not find the time to extend it.

make SidechainCompressor class for sidechain compression

https://en.wikipedia.org/wiki/Dynamic_range_compression#Side-chaining

https://github.com/DanielRudrich/SimpleCompressor

Check out DawDreamer’s AddProcessor.h class for how it sums multiple buffers together. The input buffer will have four channels. We need to calculate a sidechain amount from channels 2 and 3 to apply to channels 0 and 1.

sidechain = engine.make_sidechain_compressor_processor("my_bus")
sidechain.threshold = -6. # threshold in dB
sidechain.ratio = 2. # ratio
sidechain.attack = 2. # attack in milliseconds
sidechain.release = 50. # release in milliseconds

vocals = engine.make_playback_processor("vocals", vocals_data)
drums = engine.make_playback_processor("drums", drums_data)

graph = [
    (vocals, []),
    (drums, []),
    (sidechain, ["vocals", "drums"])
]

# the first input "vocals" would be the primary signal. Its volume would be attenuated
# more when "drums" is louder.

engine.load_graph(graph)
engine.render(10)

Update 2020-12-08: I've confirmed that certain VST effects plugins can do sidechain compression if you pass two inputs. The second input will determine how much to compress. You should also make sure the VST parameters are configured to enable the secondary bus.

Update 2021-05-09: The new FaustProcessor would be great for this.
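The computation described above (derive a gain signal from the sidechain channels and apply it to the main channels) can be sketched in NumPy. This is only an illustrative one-pole envelope follower using the parameter names from the mockup; it is not DawDreamer's or SimpleCompressor's actual implementation:

```python
import numpy as np

def sidechain_gain(main, sidechain, threshold_db=-6.0, ratio=2.0,
                   attack_ms=2.0, release_ms=50.0, sample_rate=44100):
    """Attenuate `main` (shape (2, N)) based on the level of `sidechain` (2, N)."""
    # Per-sample peak level of the sidechain signal, in dB.
    level = np.abs(sidechain).max(axis=0)
    level_db = 20.0 * np.log10(np.maximum(level, 1e-10))

    # Static gain computer: above the threshold, reduce by the ratio.
    over_db = np.maximum(level_db - threshold_db, 0.0)
    target_gain_db = -over_db * (1.0 - 1.0 / ratio)

    # One-pole attack/release smoothing of the gain signal.
    attack = np.exp(-1.0 / (attack_ms * 0.001 * sample_rate))
    release = np.exp(-1.0 / (release_ms * 0.001 * sample_rate))
    smoothed_db = np.empty_like(target_gain_db)
    g = 0.0
    for i, target in enumerate(target_gain_db):
        coeff = attack if target < g else release  # falling gain -> attack
        g = coeff * g + (1.0 - coeff) * target
        smoothed_db[i] = g
    return main * (10.0 ** (smoothed_db / 20.0))
```

In a real processor the four-channel input buffer would be split so that channels 2 and 3 feed the level detector and channels 0 and 1 receive the gain.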
