
Resonance Audio Source Code

Home Page: https://resonance-audio.github.io/resonance-audio/

License: Apache License 2.0

CMake 2.01% Shell 0.17% MATLAB 3.95% C++ 85.26% C# 8.08% ShaderLab 0.15% Objective-C 0.06% Objective-C++ 0.06% C 0.26%

resonance-audio's Introduction

Resonance Audio Source Code

This is the official open source project for the Resonance Audio SDK. This repository consists of the full source code of the Resonance Audio C++ library, as well as the platform integrations into Unity, FMOD, Wwise and DAW tools.

Resonance Audio started as a Google product and has since graduated to open source. It is supported by members of our steering committee who are also project committers.

This document provides quick instructions for building the SDK from source code.

For more detailed documentation about using the SDK, visit our developer docs. If you are interested in contributing to the project, please read the Contributing to Resonance Audio section below.

Build Instructions

Clone the repository:

git clone https://github.com/resonance-audio/resonance-audio $YOUR_LOCAL_REPO

Software Requirements

In addition to the system C++ software development platform tools / toolchains, the following software is required to build and install the Resonance Audio SDKs:

  • CMake
  • Git (with Git-Bash on Windows)
  • Mercurial (used by the third party dependency scripts)

Note: For Windows builds, Visual Studio 2015 is recommended.

Third Party Dependencies

All third party dependencies must be installed into the third_party subfolder in the repository. To simplify the installation, bash scripts are located within third_party that automatically clone, build, and install the required third party source code.

Note: On Windows, these scripts can be executed in the Git-Bash console (which gets installed as part of Git for Windows).

To clone the core dependencies into the repository, run:

./$YOUR_LOCAL_REPO/third_party/clone_core_deps.sh

Note: These dependencies do not need to be built, since their source code is directly pulled in from the build scripts.

Unity Platform Dependencies (nativeaudioplugins, embree, ogg, vorbis)

The Unity plugin integrates additional tools to estimate reverberation from game geometry and to capture Ambisonic soundfields from a game scene. These features require the Embree, libOgg and libVorbis libraries to be prebuilt.

To clone and build the additional Unity dependencies, run:

./$YOUR_LOCAL_REPO/third_party/clone_build_install_unity_deps.sh

FMOD Platform Dependencies (FMOD Low Level API)

To add the additional FMOD dependencies, download and install the FMOD Studio API (which includes the FMOD Low Level API).

Note: On Linux, unzip the downloaded package within the third_party subfolder and rename its folder to fmod.

Wwise Platform Dependencies (WwiseIncludes)

To clone the additional Wwise dependencies, run:

./$YOUR_LOCAL_REPO/third_party/clone_wwise_deps.sh

The Wwise Authoring Plugin (Windows only) also requires the Microsoft Foundation Classes SDK. To install the SDK on Windows:

  1. Open the Control Panel
  2. Select Programs->Programs and Features
  3. Right-click on Microsoft Visual C++ Build Tools, and select Change
  4. Install MFC SDK

DAW Tools Dependencies (VST2 Audio Plug-Ins SDK)

To add the additional DAW Tools dependencies, download Steinberg's VST 3.X.X Audio Plug-Ins SDK (which includes the VST2 Audio Plug-Ins SDK) and extract the package into the third_party subfolder.

Build Resonance Audio SDKs

This repository provides a build.sh script in the root folder that configures the build targets, triggers the compilation, and installs the artifacts for the specified platform into the target installation folder.

The script provides the following flags:

  • -t=|--target=
    • RESONANCE_AUDIO_API: Builds the Resonance Audio API
    • RESONANCE_AUDIO_TESTS: Runs the Resonance Audio unit tests
    • GEOMETRICAL_ACOUSTICS_TESTS: Runs the geometrical-acoustics-specific unit tests
    • UNITY_PLUGIN: Builds the Resonance Audio plugin for Unity
    • WWISE_AUTHORING_PLUGIN: Builds the Resonance Audio authoring plugin for Wwise
    • WWISE_SOUND_ENGINE_PLUGIN: Builds the Resonance Audio sound engine plugin for Wwise
    • FMOD_PLUGIN: Builds the Resonance Audio plugin for FMOD
    • VST_MONITOR_PLUGIN: Builds the Resonance Audio VST Monitor Plugin
  • -p=|--profile=
    • Debug: Debug build
    • RelWithDebInfo: Release build with debug information
    • Release: Release build
  • --msvc_dynamic_runtime
    • Enables dynamic linking against the run-time library on Windows (/MD, /MDd). By default, all Windows builds are statically linked against the run-time library (/MT, /MTd). Note that the third party dependencies must be compiled with the same options to avoid library conflicts.
  • --verbose_make
    • Enables verbose make/build output.
  • --android_toolchain
    • Enables the Android NDK toolchain to target Android builds (may require adjustments to ANDROID_NDK, ANDROID_NATIVE_API_LEVEL and ANDROID_ABI script variables). For more information, see project documentation at https://github.com/taka-no-me/android-cmake.
  • --ios_os_toolchain
    • Enables the iOS toolchain to target iOS device builds.
  • --ios_simulator_toolchain
    • Enables the iOS toolchain to target iOS simulator builds.
For example, to build and run the Resonance Audio unit tests:

./$YOUR_LOCAL_REPO/build.sh -t=RESONANCE_AUDIO_TESTS
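Flags can be combined; for instance, a release build of the Unity plugin that links dynamically against the MSVC runtime on Windows would be:

./$YOUR_LOCAL_REPO/build.sh -t=UNITY_PLUGIN -p=Release --msvc_dynamic_runtime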

Citations

If you find Resonance Audio useful and would like to cite it in your publication, please use:

Gorzel, M., Allen, A., Kelly, I., Kammerl, J., Gungormusler, A., Yeh, H., and Boland, F., "Efficient Encoding and Decoding of Binaural Sound with Resonance Audio", In proc. of the AES International Conference on Immersive and Interactive Audio, March 2019

The full paper is available (open access) at: http://www.aes.org/e-lib/browse.cfm?elib=20446

Contributing to Resonance Audio

If you would like to contribute changes to the Resonance Audio project, please make a pull request for one of our project committers to review.

Steering Committee

The Resonance Audio project is overseen by a steering committee established to help guide the technical direction of the project in collaboration with the entire developer community.

The intention of the steering committee is to cultivate collaboration across the developer community for improving the project and ensuring Resonance Audio continues to work well for everyone.

The committee will lead the Resonance Audio project in major decisions by consensus and ensure that Resonance Audio can meet its goals as a truly open source project.

The steering committee consists of the following members (ordered by company name):

  • Martin Dufour, Audiokinetic
  • Aaron McLeran, Epic Games
  • Mathew Block, Firelight Technologies
  • Alper Gungormusler, Google
  • Eric Mauskopf, Google
  • Haroon Qureshi, Google
  • Ian Kelly, Google
  • Julius Kammerl, Google
  • Marcin Gorzel, Google
  • Damien Kelly, Google (YouTube)
  • Jean-Marc Jot, Magic Leap
  • Michael Berg, Unity Technologies

Affiliations are listed for identification purposes only; steering committee members do not represent their employers or academic institutions.

resonance-audio's People

Contributors

aclockworkkelly, anokta, claywilkinson, erikthysell, fredsa, haroonq, jkammerl, marcmutz, mauskopf, mgorzel, pushrax, seba10000, tak, tonetechnician


resonance-audio's Issues

Custom integration

Is there any documentation or tutorial on how to integrate it with a custom audio/game engine, as there is for Steam Audio?
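(For reference, a minimal sketch of driving the C++ library directly through the public interface in api/resonance_audio_api.h; the buffer sizes, rendering mode, and source position below are illustrative, not prescriptive:)

#include "api/resonance_audio_api.h"

// Minimal sketch: create the engine, feed one mono source, pull binaural output.
// Assumes 48 kHz stereo output with 256-frame buffers; adapt to your engine.
int main() {
  constexpr size_t kNumOutputChannels = 2;
  constexpr size_t kFramesPerBuffer = 256;
  constexpr int kSampleRateHz = 48000;

  vraudio::ResonanceAudioApi* api = vraudio::CreateResonanceAudioApi(
      kNumOutputChannels, kFramesPerBuffer, kSampleRateHz);

  // One spatialized mono source, placed 2 m in front and 1 m to the right.
  const int source = api->CreateSoundObjectSource(
      vraudio::RenderingMode::kBinauralHighQuality);
  api->SetSourcePosition(source, 1.0f, 0.0f, -2.0f);

  // Per audio callback: push source samples, then render the binaural mix.
  float input[kFramesPerBuffer] = {};  // mono samples from your engine
  float output[kNumOutputChannels * kFramesPerBuffer];
  api->SetInterleavedBuffer(source, input, /*num_channels=*/1, kFramesPerBuffer);
  api->FillInterleavedOutputBuffer(kNumOutputChannels, kFramesPerBuffer, output);

  api->DestroySource(source);
  delete api;
  return 0;
}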

Multiple listeners or more ears

Thanks for a great library.

Is it possible to add multiple listeners in the same scene?
Or alternatively, add more ears to the listener.

Head coordinate convention is not documented.

It's not documented (that I could find, at least) which direction corresponds to up, forwards, right, etc. for the head model.

After some reverse engineering I have determined that Resonance follows the OpenGL convention of x = right, y = up, and -z = forwards.

As this is a matter of convention, it should be documented.
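(A short sketch of that convention expressed through the listener API, assuming the SetHeadPosition/SetHeadRotation methods on vraudio::ResonanceAudioApi:)

#include "api/resonance_audio_api.h"

// Identity rotation means the listener looks down -z with +y up and +x right,
// matching the OpenGL convention determined above.
void ResetListenerPose(vraudio::ResonanceAudioApi* api) {
  api->SetHeadPosition(0.0f, 0.0f, 0.0f);        // x = right, y = up, -z = forward
  api->SetHeadRotation(0.0f, 0.0f, 0.0f, 1.0f);  // quaternion (x, y, z, w): identity
}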

Building VST plugin

Hi there,
Having some issues getting the VST dependencies set up properly - the link in the README points to a download which may differ from what the build script expects (e.g. it expects aeffectx.h to exist, but it doesn't). Sourcing the VST2 SDK from elsewhere works around this, with the correct source files seemingly in place, but I'm hitting more errors further down the line during compilation, with undeclared base classes and so on. Apologies for the brief-and-vagueness, but I'm assuming this is just an outdating issue, and a link to the appropriate VST2/VST3 SDK versions would solve the problem.

Let me know if you need more specific paths, errors or other information!

Are there docs or an architecture explanation for calling resonance-audio directly instead of via the various SDKs?

Hi, I want to integrate the resonance-audio source code directly. It seems only usage via GvrAudioEngine, Android, web, or the Unity SDK is supported. I tried to call the APIs directly, but it didn't work out. Are there docs about calling these C++ APIs directly? I found some unit tests in the repo, but they don't seem to meet my needs.

Does anyone have the same issue? Looking forward to replies. Thanks~

Enable/Disable Wwise Resonance Plugin at Runtime within Unreal Engine project?

Hello there
I've been experimenting with the Resonance Wwise plugin within Unreal Engine.
I have everything working and it sounds great.
I was wondering how I would go about implementing a way to enable/disable/bypass the Resonance Wwise plugin at runtime whilst my Unreal application is running.

I'd like to include a menu option so the user can switch it on/off whilst the program is running.

The issue I can see is that the bus would need to switch dynamically from ambisonic to stereo, and I'm not sure how this can be dealt with inside the Wwise project.
Thanks for your help

Robin

Does the Resonance Audio FMOD plugin work with FMOD 2.0?

Is the Resonance Audio FMOD plugin fully compatible with the 2.0 release of FMOD?

I've tried to use it, but when I get the DSP parameter descriptions, instead of the corresponding name or description all I get (after casting it) is an array of unprintable symbols.

This is using FMOD v2.0, the Core API with the C# header, and the Resonance Audio DLL bundled with that version of FMOD.

If I do the same thing using FMOD 1.10.12, the data displays correctly :)

thanks!

Visual Studio 2022 cmake fix

Sorry to be like this, but creating a pull request is just too much effort. In case anyone is having trouble compiling:
You need to modify all the .sh files you build with and set MSVC_GENERATOR to your Visual Studio version.
Eigen needs to be an older branch like 3.2 instead of master.
Some files like clone_build_install_unity_deps.sh append Win64 to MSVC_GENERATOR to make the WIN64_GENERATOR_FLAG, which breaks x86_64 compilation.

Add documentation for the FMOD Low Level / Core API

Hello guys,

If possible, could you add a page on how to use the Resonance Audio plugin for FMOD directly with the Low Level API, instead of the FMOD Studio authoring tool?

In my case I'd prefer that because I'm blind and the FMOD Studio tool isn't accessible, but using the API directly from C++ or C# is completely possible.

So far I am able to load the plugin, get the listener and source DSPs, and apply one to the master channel and the other to a sound channel, but I can't get the HRTF effect to be applied.

And I don't know how to pass the source (or listener) parameters to the DSP, such as gain, distance, spread, position, etc.

Documentation or an example of that would be very much appreciated.

Thanks!

Inconsistent diffused energy levels

Hi,

I've been testing the Resonance ray tracing for Unity. I found that the reverb rain algorithm underestimates the diffused energy. I wrapped it in Python to plot the energy impulse response:
[plot: energy impulse response]

It seems that there's some underestimation of the diffused energy. My suspicion is that it comes from this line in the code:

const float diffuse_rain_energy_factor =

I did some math, and according to my calculations the factor is supposed to be:
diffuse_rain_energy_factor = 4 * kPi * direction_pdf / distance_to_listener_on_ray / distance_to_listener_on_ray;
(Note the scalar factor difference of 4 * kPi.)

Would love your opinion on that.
Emil

Hard-coded paths in ResonanceAudioReverbBakingWindow.cs

Lines 80-81:

private const string materialMapperAssetPath =
"Assets/3rdParty/ResonanceAudio/Resources/ResonanceAudioMaterialMapper.asset";

This obviously breaks if Resonance is moved to a subdirectory. Surely there's a better way to handle this? I'm rather keen on keeping 3rd Party assets separate from my project's assets.

Near-field in stereo

Hi, we're developing a first-person game and would prefer to play a lot of the first-person-specific sounds in stereo (or possibly as an ambisonic soundfield).

In FMOD we have a stereo sound event without a spatializer; we then add a new Resonance Spatializer specific event on top, which converts the source event from stereo to mono. What we want is for the sound to play in stereo in the near field, and then use HRTF when the sound is played beyond the near field. Is this currently possible?

We found a workaround in FMOD, but then we can't use the Resonance Spatializer for the first-person variant... which means that Resonance Reverb won't affect first-person sounds. Could the Resonance Soundfield be used instead (if we apply ambisonic sources at the source instead of stereo), and would that receive reverb?

Steam Audio has this behaviour: it uses HRTF on mono sound for things beyond the near field, but falls back to stereo for near-field audio sources, which would work much better for us in a first-person game.

Any insight or possible workarounds would be much appreciated.

Audio Factory compatibility

For my college class, I wanted to showcase the Resonance Audio demo app, Audio Factory.

I was somewhat disappointed to be told by the Play Store that my recent devices (Android 10, Snapdragon 660 and 732, the latter with a headphone jack available) are not compatible!

Is there any chance the compatibility issues will be resolved? It is unclear, however, what is causing the problem.

How to use Occulusion

In the Fundamental Concepts section you mention occlusion: https://resonance-audio.github.io/resonance-audio/discover/concepts.html

How is this supposed to be used in practice?

The only relevant API call seems to be SetSoundObjectOcclusionIntensity.

Is the expectation that the 'occlusion intensity' is computed by the client code and passed to Resonance?

I could compute this by ray tracing between the sound source and the listener position and checking for occluders; is that what you had in mind?

Thanks!
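(A sketch of the client-side pattern described above; RaycastOccluderCount is a hypothetical engine-side helper, not part of Resonance:)

#include "api/resonance_audio_api.h"

// Hypothetical helper from your engine: number of occluders intersected on the
// segment between source and listener.
int RaycastOccluderCount(const float source_pos[3], const float listener_pos[3]);

// Compute the occlusion intensity in game code and hand it to Resonance.
// 0.0f means unoccluded; larger values muffle the source more strongly.
void UpdateOcclusion(vraudio::ResonanceAudioApi* api, int source,
                     const float source_pos[3], const float listener_pos[3]) {
  const int occluders = RaycastOccluderCount(source_pos, listener_pos);
  api->SetSoundObjectOcclusionIntensity(source, static_cast<float>(occluders));
}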

Custom HRTFs

Hello - is it possible to use a custom HRTF (SOFA) with Resonance? I can't find any documentation on the HRTF implementation. Thanks!

clone_core_deps.sh seems out-of-date?

When using said script to clone the Eigen and pffft libraries, it seems to be trying to use Mercurial to grab non-existent repos (Eigen, for example, has moved over to GitLab). I get HTTP authorisation prompts and 403s or aborts when trying the script, seemingly because of this.

The workaround was to replace the Mercurial commands with git clone commands pointing to the correct repo URLs - but I wanted to check if there was any other reason this wasn't updated?

Building the Unity Android plugin the right way

Hello!

I am trying to build the Android binary for the Unity plugin on Ubuntu Linux 16.04. It builds successfully with CMake 3.5.1 and android-ndk-r15c.

./build.sh -t=UNITY_PLUGIN --android_toolchain --profile=Release

This build works on Android, but has 100% DSP CPU usage with laggy sound, similar to #8.
I added

 project(ResonanceAudio)
+set(CMAKE_BUILD_TYPE Release)

to CMakeLists.txt to force the -O3 flag, then manually stripped the binary with

cd build
make install/strip

Now I have a binary of size 722620 bytes and it plays normally.
But the binary from the Unity plugin distribution has size 460524 bytes and a different number of symbols in the library.

Also, my binary consumes 40% DSP CPU, while the plugin distribution binary consumes 20% DSP CPU.

This will be a real problem in the future.

All tests were on a simple project with one audio source, one listener, and spatialization enabled.

Where is my mistake?

Add diffuse-field coherence matching feature

It would be nice if the diffuse-field coherence matching [1] algorithm were implemented, as in the AmbiBIN plugin in https://github.com/leomccormack/SPARTA.
It made quite a big difference when I tried it out.

The implementation below might help:
https://github.com/leomccormack/Spatial_Audio_Framework/blob/c6d468e42f73c3f1622332474d33530eb6fe523b/framework/modules/saf_hoa/saf_hoa.c

[1] Zaunschirm, Markus, Christian Schörkhuber, and Robert Höldrich. 2018. “Binaural Rendering of Ambisonic Signals by Head-Related Impulse Response Time Alignment and a Diffuseness Constraint.” The Journal of the Acoustical Society of America 143 (6): 3616.

visionOS support

Hello,
Thanks for your great library.
What should I do if I want to add visionOS support?
Thanks in advance for your help.

Linking error

Dear all,
I am trying to compile this project for Unity. I downloaded CMake, Git, Mercurial, and Visual Studio 2015, and I am working on Windows 10 x64.

I cloned the repository and executed:
./$YOUR_LOCAL_REPO/third_party/clone_core_deps.sh
./$YOUR_LOCAL_REPO/third_party/clone_build_install_unity_deps.sh

Then I tried to compile with ./build.sh -t=UNITY_PLUGIN, but got the following error:

unity_win.def : error LNK2001: unresolved external symbol SetRt60ValuesAndProxyRoomProperties [.$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj]
.$YOUR_LOCAL_REPO/build/platforms/unity/Release/audiopluginresonanceaudio.lib : fatal error LNK1120: 1 unresolved externals [.$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj]
Compiling project ".$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj" (default target) NOT COMPLETED.
Compiling project ".$YOUR_LOCAL_REPO\build\ALL_BUILD.vcxproj" (default target) NOT COMPLETED.
Compiling project ".$YOUR_LOCAL_REPO\build\install.vcxproj" (default target) NOT COMPLETED.

Compiling NOT SUCCEDED.

".$YOUR_LOCAL_REPO\build\install.vcxproj" (default target) (1) ->
".$YOUR_LOCAL_REPO\build\ALL_BUILD.vcxproj" (default target) (3) ->
".$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj" (default target) (9) ->
(destination: Link) ->
unity_win.def : error LNK2001: unresolved external symbol SetRt60ValuesAndProxyRoomProperties [.$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj]
.$YOUR_LOCAL_REPO/build/platforms/unity/Release/audiopluginresonanceaudio.lib : fatal error LNK1120: 1 unresolved externals [.$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj]

Thanks and regards,
Daniel Pinardi

Creating static library from resonance audio source code

Hi, has anyone created a static library (.lib file) from the source code?
I am trying to do it with Visual Studio but am having problems achieving this. I don't have any #include errors, but I get several errors in the files before and after compiling. Has anyone done this with Visual Studio who could share the project or upload a .lib file?

which headers do I need when using from C?

Hi,

I've successfully compiled on Windows 10 with VS2017, but I only see 2 headers in the install/includes folder. That can't be it, since, e.g., the web audio bindings have many classes and methods to represent the various nodes.

Which other files do I need to use it from C?

Output different between 24kHz and 48kHz

Sounds played at 24kHz play at a lower volume than sounds played at 48kHz.

This was picked up in a project made in Unity using FMOD, but it also occurs in Unity's built-in audio.

To reproduce:

  • Set up two sounds, one using Resonance and one not.
  • Set the system sample rate to 48kHz.
  • Play the sounds and notice that they play at the same levels.
  • Change the sample rate to 24kHz.
  • Play the sounds and notice that the sound using Resonance is at a lower level than the other.

Slowed and stuttered audio with Oculus SDK

Dear all,
I successfully recompiled the Resonance Audio Unity plugin with the Android toolchain (in a Linux environment).
I developed a VR app with the ResonanceAudioSoundfield prefab for decoding an Ambisonic audio track (1st order, 4 channels).

Here is the problem: if I use the Cardboard SDK in Unity, everything works well; audio is fluid and correctly spatialized. But if I use the Oculus SDK instead (I would like to target Samsung Gear VR or Oculus Go), the audio is corrupted. I can still perceive the correct spatialization, but the audio is really slowed down and stuttering.

It seems like a buffering problem...

Is there any way to solve this?

Segfault in SetInterleavedBuffer if the number of channels doesn't agree with the listener's number of channels

Running into a weird bug - could be my usage but I'm not sure.

I have an SDL wrapper that gives me the most applicable sound device possible (I request 8 channels but I'm fine with anything it hands back, aside from the format, which I force to host-order f32; frequency and number of channels can differ).

I then pass the obtained (actual) number of channels and frequency information to CreateResonanceAudioApi.

Elsewhere, I create a non-positional stereo sound object (for background music, in my case) via CreateStereoSource. Since it's coming from a stereo OGG file, I specify 2 channels (i.e. CreateStereoSource(2)).

When the audio is actually processing, I call:

rapi->SetInterleavedBuffer(resaud_id, samples, 2, num_samples);

where resaud_id is the return value from CreateStereoSource, samples is a float* of length num_channels * num_samples (in this case, num_channels is 2 for stereo), 2 indicates there are two channels present, and num_samples is the number of samples per channel.

When the underlying device gives me 2 channels back, everything works fine. When it gives me back anything other than 2 channels, I get a segfault within the call to SetInterleavedBuffer.

Full stack trace:

Stack trace:
32      0x7fff62d2240d thread_start + 13
31      0x7fff62d26249 _pthread_start + 66
30      0x7fff62d232eb _pthread_body + 126
29         0x10cd3ea95 RunThread + 21
28         0x10cbda5d4 SDL_RunThread + 132
27         0x10cd26dc7 audioqueue_thread + 215
26      0x7fff36c008be CFRunLoopRunSpecific + 455
25      0x7fff36c014ec __CFRunLoopRun + 2524
24      0x7fff36c194f5 __CFRunLoopDoSource1 + 527
23      0x7fff36c19597 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 41
22      0x7fff355ad8a4 mshMIGPerform + 220
21      0x7fff355ada96 _XCallbackNotificationsAvailable + 33
20      0x7fff355ade6c AQCallbackReceiver_CallbackNotificationsAvailable + 121
19      0x7fff355ae007 ClientAudioQueue::FetchAndDeliverPendingCallbacks(unsigned int) + 293
18      0x7fff355c2fe9 AQClientCallbackMessageReader::DispatchCallbacks(void const*, unsigned long) + 195
17      0x7fff355c4c43 ClientAudioQueue::CallOutputCallback(AudioQueueBuffer*) + 247
16         0x10cd279ee outputCallback + 798
15         0x10cadc35d on_sdl_audio_wants_next_buffer(void*, unsigned char*, int) + 365
14         0x10cadcac0 void entt::basic_view<entt::entity, entt::exclude_t<>, mygame::component::audio_playback>::each<on_sdl_audio_wants_next_buffer(void*, unsigned char*, int)::$_0>(on_sdl_audio_wants_next_buffer(void*, unsigned char*, int)::$_0) const + 96
13         0x10cae4f77 on_sdl_audio_wants_next_buffer(void*, unsigned char*, int)::$_0 std::for_each<entt::basic_storage<entt::entity, mygame::component::audio_playback, void>::iterator<false>, on_sdl_audio_wants_next_buffer(void*, unsigned char*, int)::$_0>(entt::basic_storage<entt::entity, mygame::component::audio_playback, void>::iterator<false>, entt::basic_storage<entt::entity, mygame::component::audio_playback, void>::iterator<false>, on_sdl_audio_wants_next_buffer(void*, unsigned char*, int)::$_0) + 71
12         0x10cae5098 auto on_sdl_audio_wants_next_buffer(void*, unsigned char*, int)::$_0::operator()<mygame::component::audio_playback>(mygame::component::audio_playback&) const + 56
11         0x10cb45669 mygame::component::audio_playback::next(std::shared_ptr<vraudio::ResonanceAudioApi>) + 201
10         0x10c8c61d3 vraudio::ResonanceAudioApiImpl::SetInterleavedBuffer(int, float const*, unsigned long, unsigned long) + 51
9          0x10c8c621b void vraudio::ResonanceAudioApiImpl::SetSourceBuffer<float const*>(int, float const*, unsigned long, unsigned long) + 59
8          0x10c917d82 vraudio::LocklessTaskQueue::Execute() + 66
7          0x10c917fad vraudio::LocklessTaskQueue::ProcessTaskList(vraudio::LocklessTaskQueue::Node*, bool) + 285
6          0x10c9183e5 std::function<void ()>::operator()() const + 53
5          0x10c8d2451 std::__function::__func<vraudio::ResonanceAudioApiImpl::CreateStereoSource(unsigned long)::$_4, void ()>::operator()() + 33
4          0x10c8d360d void std::__invoke_void_return_wrapper<void>::__call<vraudio::ResonanceAudioApiImpl::CreateStereoSource(unsigned long)::$_4&>(vraudio::ResonanceAudioApiImpl::CreateStereoSource(unsigned long)::$_4&&&) + 29
3          0x10c8d365d decltype(std::forward<vraudio::ResonanceAudioApiImpl::CreateStereoSource(unsigned long)::$_4&>(fp)()) std::__invoke<vraudio::ResonanceAudioApiImpl::CreateStereoSource(unsigned long)::$_4&>(vraudio::ResonanceAudioApiImpl::CreateStereoSource(unsigned long)::$_4&&&) + 29
2          0x10c8d36b9 vraudio::ResonanceAudioApiImpl::CreateStereoSource(unsigned long)::$_4::operator()() const + 57
1       0x700004f98e60 5   ???                                 0x0000700004f98e60 0x0 + 123145385774688
0       0x7fff62d1ab5d _sigtramp + 29
2020-01-04 19:59:49.703 (   0.329s) [AudioQueue threa]                       :0     FATL| Signal: SIGSEGV

Is there something I'm missing about how to call SetInterleavedBuffer? Can I not render stereo (2 channel) OGG audio to a surround-sound (>2 channel) resonance audio API instance?

Thank you for any information :)
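(Not a diagnosis of the crash, but for reference, a sketch of the channel-count bookkeeping involved: the per-source buffer always uses the source's own channel count, while the output buffer must match the channel count the engine was created with:)

#include "api/resonance_audio_api.h"

// The stereo source keeps its 2 channels regardless of the device layout;
// only FillInterleavedOutputBuffer() uses the device/engine channel count
// passed to CreateResonanceAudioApi() at construction time.
void PushStereoMusicAndRender(vraudio::ResonanceAudioApi* api,
                              int stereo_source,
                              const float* interleaved_stereo, size_t num_frames,
                              float* device_buffer, size_t device_channels) {
  api->SetInterleavedBuffer(stereo_source, interleaved_stereo,
                            /*num_channels=*/2, num_frames);
  api->FillInterleavedOutputBuffer(device_channels, num_frames, device_buffer);
}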

Fails to build in C++20

Hello,

resonance_audio/base/aligned_allocator.h uses facilities that were deprecated in C++17 and removed in C++20:

template <typename Type, size_t Alignment>
class AlignedAllocator : public std::allocator<Type> {
 public:
  typedef typename std::allocator<Type>::pointer Pointer;
  typedef typename std::allocator<Type>::const_pointer ConstPointer;

These typedefs no longer exist in C++20: https://en.cppreference.com/w/cpp/memory/allocator .

They should either be replaced with the equivalent types:

  using Pointer = Type *;
  using ConstPointer = const Type *;

Even better (and possibly together with SizeType), they should be routed through allocator_traits, since it is allowed to specialize std::allocator for user-defined types.

Relevant Qt patch:
https://codereview.qt-project.org/c/qt/qtmultimedia/+/419240/1/src/3rdparty/resonance-audio/resonance_audio/base/aligned_allocator.h#b77
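(A sketch of that allocator_traits-based spelling, assuming the class template shown above:)

#include <cstddef>
#include <memory>

template <typename Type, size_t Alignment>
class AlignedAllocator : public std::allocator<Type> {
 public:
  // Route the nested types through allocator_traits instead of relying on
  // std::allocator members that were removed in C++20.
  using Traits = std::allocator_traits<std::allocator<Type>>;
  using Pointer = typename Traits::pointer;
  using ConstPointer = typename Traits::const_pointer;
  using SizeType = typename Traits::size_type;
  // ... rest of the allocator unchanged ...
};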

Building the Resonance Audio SDK on ARM boards like the Raspberry Pi or Jetson

hi

Is it possible to build the Resonance Audio SDK on ARMv8 CPU boards like the Raspberry Pi or the Jetson Nano/NX, running Raspbian/Ubuntu respectively?

I am trying to build the SDK, but I get some errors related to the SSE instruction set, which does not exist on ARM (I believe the equivalent on ARM CPUs is the NEON instruction set):

[  2%] Building C object resonance_audio/CMakeFiles/PffftObj.dir/__/third_party/pffft/fftpack.c.o
c++: error: unrecognized command line option ‘-msse’; did you mean ‘-fdse’?
c++: error: unrecognized command line option ‘-msse2’
c++: error: unrecognized command line option ‘-msse3’

After cloning the main repo and the dependencies, here is the command I entered for the build:
./resonance-audio/build.sh -t=RESONANCE_AUDIO_TESTS -t=RESONANCE_AUDIO_API -t=GEOMETRICAL_ACOUSTICS_TESTS -t=WWISE_SOUND_ENGINE_PLUGIN

thanks for your help.

Sources behind the listener sound muffled, and it is difficult to tell that they are supposed to be behind

Hi,

When a sound is behind the listener, it sounds muffled and it is difficult to tell that it is supposed to be behind you.

Here is an audio sample demonstrating this. The file contains the same source moving counterclockwise in a unit circle around the listener, rendered with 3 different audio libraries: Resonance Audio, the Oculus Spatializer, and Steam Audio. The positioning of the other 2 libraries for sources behind you is much better.

Generate Unity plugin with different set of HRTFs (SHHRIRs)

Dear all,
I'm trying to compile this project for Unity, but using a different set of SHHRIRs (also derived from the SADIE dataset). I have downloaded CMake, Git, Mercurial, and Visual Studio 2015, and I am working on Windows 10 x64.
The steps I followed are:

  1. Downloaded the repository and executed:
    ./$MY_LOCAL_REPO/third_party/clone_core_deps.sh
    ./$MY_LOCAL_REPO/third_party/clone_build_install_unity_deps.sh

  2. Substituted the files in ./$MY_LOCAL_REPO/third_party/SADIE_hrtf_database/WAV/Subject_002/DFC/48K_24bit with my own files (the exact same set, but with modified amplitudes in the responses between 0 and 90º azimuth, in order to create a "null zone" that makes it easy to check whether the new set is loaded).

  3. Ran loadsadie.m three times with input parameters 1, 2, and 3 to generate up to the 3rd-order ambisonic SADIE HRIRs. The three executions returned SUCCESS! and the corresponding files were generated.

  4. Ran sadieshhrirs.m three times with shelfFilter set to true in order to generate the SHHRIRs. The three executions ran successfully and generated the corresponding files.

  5. Ran sadieshhrirstest.m to verify the generated SHHRIRs, with a success result for all 3 orders.

  6. Substituted the files in ./$MY_LOCAL_REPO/third_party/SADIE_hrtf_database/WAV/Subject_002/SH with the newly generated files.

  7. Deleted the contents of ./$MY_LOCAL_REPO/third_party/SADIE_hrtf_database/generated in order to run generate_hrtf_assets.py and generate the assets corresponding to the new files. No modifications were made to either the script or hrtf_assets.iad, since the paths and filenames are exactly the same as the originals.

  8. Ran ./$MY_LOCAL_REPO/build.sh -t=RESONANCE_AUDIO_TESTS and all tests passed.

  9. Ran ./$MY_LOCAL_REPO/build.sh -t=UNITY_PLUGIN and exported the package from the generated Unity project.

  10. Created a new Unity project, imported the plugin, and set both the spatializer and the ambisonic decoder plugins to Resonance Audio.

At this point I expected that, when running the included ResonanceAudioDemo, I would be able to verify that no sound is received by the listener when the source is placed between 0 and 90º azimuth. However, when testing it, the audio response in that area is exactly the same as with the original set of SHHRIRs (and, most importantly, audio is being received).
From my initial understanding of the code, the SHHRIRs are loaded as assets, so the steps described above should be all that is needed to compile a plugin based on a different set of SHHRIRs, but I cannot find at which point I messed up, or whether the SHHRIRs are being read from a different place.

I would really appreciate it if you could shed some light on this issue; I've been fighting with it but am not able to find the root cause.

Thanks and best regards,
Jorge

[QUESTION] Specific user speaker configuration

Hello,
Thanks for your great library.
What should I do if I want to use an HOA decoder other than binaural?
I would like to output the HOA stream to a specific loudspeaker configuration.
Thanks in advance for your help.

Etienne

Unable to compile Unity plugin with Android toolchain on Windows

Dear all,

I would like to compile the Resonance Audio plugin for Unity on Windows, with Android support as well (it should end up in ResonanceAudio/Plugins/Android/libs/armeabi-v7a/libaudiopluginresonanceaudio.so), but unfortunately I am having problems.

This is my configuration: Windows 10 + Git-Bash + Mercurial + CMake 3.11.
After cloning and building the core dependencies and Unity dependencies, I am able to compile the Unity plugin correctly ( ./build.sh -t=UNITY_PLUGIN ), but when I try ./build.sh -t=UNITY_PLUGIN --android_toolchain , I get this error:

CMake Error at third_party/android-cmake/android.toolchain.cmake:616 (message):
Could not find any working toolchain in the NDK. Probably your Android NDK
is broken.
Call Stack (most recent call first):
C:/Program Files (x86)/cmake-3.11.0-win64-x64/share/cmake-3.11/Modules/CMakeDe termineSystem.cmake:94 (include)
CMakeLists.txt:39 (project)

CMake Error at CMakeLists.txt:39 (project):
CMAKE_SYSTEM_NAME is 'Android' but CMAKE_GENERATOR specifies a platform
too: 'Visual Studio 14 2015 Win64'

So, I modified the build.sh in this way (line 29):
from -> ANDROID_NDK="~/android-ndk-r15c/"
to -> ANDROID_NDK="./android-ndk-r15c/"
Then I tried to compile again and got this error:

CMake Deprecation Warning at C:/Program Files (x86)/cmake-3.11.0-win64-x64/share/cmake-3.11/Modules/CMakeForceCompiler.cmake:69 (message):
The CMAKE_FORCE_C_COMPILER macro is deprecated. Instead just set
CMAKE_C_COMPILER and allow CMake to identify the compiler.
Call Stack (most recent call first):
third_party/android-cmake/android.toolchain.cmake:1128 (CMAKE_FORCE_C_COMPILER)
C:/Program Files (x86)/cmake-3.11.0-win64-x64/share/cmake-3.11/Modules/CMakeDetermineSystem.cmake:94 (include)
CMakeLists.txt:39 (project)

CMake Deprecation Warning at C:/Program Files (x86)/cmake-3.11.0-win64-x64/share/cmake-3.11/Modules/CMakeForceCompiler.cmake:83 (message):
The CMAKE_FORCE_CXX_COMPILER macro is deprecated. Instead just set
CMAKE_CXX_COMPILER and allow CMake to identify the compiler.
Call Stack (most recent call first):
third_party/android-cmake/android.toolchain.cmake:1140 (CMAKE_FORCE_CXX_COMPILER)
C:/Program Files (x86)/cmake-3.11.0-win64-x64/share/cmake-3.11/Modules/CMakeDetermineSystem.cmake:94 (include)
CMakeLists.txt:39 (project)

CMake Error at CMakeLists.txt:39 (project):
CMAKE_SYSTEM_NAME is 'Android' but CMAKE_GENERATOR specifies a platform
too: 'Visual Studio 14 2015 Win64'

What am I missing?

The workaround I used on Windows 10 to get a working Android plugin of resonance-audio for Unity may be of some interest, so I'll write it down here.

Open a Windows PowerShell window as Administrator and execute this line:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
to enable the Linux subsystem.
Then go to the Microsoft Store, search for a Linux distribution (I opted for Ubuntu), and install it.

Launch the distribution, locate the resonance-audio-master folder, and type the following commands:

sudo apt update
sudo apt-get install cmake
sudo apt-get install gcc
sudo apt-get install g++
sudo apt-get install mercurial
sudo apt-get install git
sudo apt-get install p7zip-full
wget "https://dl.google.com/android/repository/android-ndk-r15c-linux-x86_64.zip"
7z x android-ndk-r15c-linux-x86_64.zip
./third_party/clone_core_deps.sh
./third_party/clone_build_install_unity_deps.sh

Open build.sh (located in the root folder of the project) in an editor and modify line 29:
from -> ANDROID_NDK="~/android-ndk-r15c/"
to -> ANDROID_NDK="./android-ndk-r15c/"

Then go back to the command line and type:
./build.sh -t=UNITY_PLUGIN --android_toolchain

The generated resonance-audio plugin for Unity will not work on Windows (in fact, the ResonanceAudio/Plugins/x86_64 folder does not contain audiopluginresonanceaudio.dll), but ResonanceAudio/Plugins/Android/libs/armeabi-v7a/libaudiopluginresonanceaudio.so has been correctly generated and works on Android devices.
So you just need to copy the ResonanceAudio/Plugins/Android folder and paste it into the resonance-audio Unity plugin compiled on Windows (where you have a working audiopluginresonanceaudio.dll but not libaudiopluginresonanceaudio.so).

It would be great to have a Windows version of the resonance-audio Unity plugin with Android support without having to compile it twice. Could someone help?

One more piece of information: when I cross-compiled from Windows through the Linux subsystem, it was much faster than compiling for Windows (from Windows, through Git-Bash). So cross-compiling for Windows from the Linux subsystem of Windows itself could, perhaps, be an interesting solution.

Thanks and regards,
Paino

Linker error on macOS

I am trying to integrate the C++ library directly, without using the plugins for Unity or FMOD.

When trying to link the ResonanceAudioObj library, I get the following error:

error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool: can't open file: /Users/Ravbug/Documents/RavEngine-Samples/build/RavEngine/deps/resonance-audio/resonance_audio/RavEngine_Samples.build/debug/ResonanceAudioObj.build/Objects-normal/x86_64/utils.o (No such file or directory)
Command Libtool failed with a nonzero exit code

The compilation of the library itself succeeds, but it cannot find utils.o when linking.

Here is how I am configuring it with CMake. I ran clone_core_deps.sh, but not the other two scripts.

set(BUILD_RESONANCE_AUDIO_API ON CACHE INTERNAL "")
add_subdirectory("${DEPS_DIR}/resonance-audio")

# ....

target_link_libraries("${PROJECT_NAME}" 
	PUBLIC
	"ResonanceAudioObj"
)

I am using the Xcode 12 generator on macOS 11.0.

libaudiopluginresonanceaudio

Dear all,
I have compiled Resonance Audio with the UNITY_PLUGIN target and the android_toolchain option.

In the path ResonanceAudio\Plugins\Android\libs\armeabi-v7a, the created .so is 8350 KB; the original one downloaded from here (https://github.com/resonance-audio/resonance-audio-unity-sdk/tree/master/Assets/ResonanceAudio/Plugins/Android/libs/armeabi-v7a) is only 450 KB.

When I build my unity app with the original version the audio is spatialized and works well.
Instead If I build the unity app with my own compiled version, audio lags.

Could someone explain how to recompile obtaining the same result as the resonance-audio-unity-sdk ?

Thanks and regards

FillInterleavedOutputBuffer outputs garbage values initially

For the first fraction of a second after initialising Resonance, FillInterleavedOutputBuffer outputs garbage float values, resulting in very nasty sounds. For example:

buf[0]: -107374180
buf[1]: -107374180
buf[2]: -107374180
buf[3]: -107374180
buf[4]: -107374180
buf[5]: -107374180
buf[6]: -107374180
buf[7]: -107374180
buf[8]: -107374180
buf[9]: -107374180

I'm guessing uninitialised data is being fed into the algorithm from the Resonance code.
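(One defensive pattern, a sketch rather than a confirmed fix: FillInterleavedOutputBuffer returns a bool, so the caller can substitute silence until it reports that valid output was produced:)

#include <cstring>
#include "api/resonance_audio_api.h"

// Fall back to silence whenever the engine reports no valid output,
// e.g. during the first buffers after initialisation.
void RenderOrSilence(vraudio::ResonanceAudioApi* api, float* buffer,
                     size_t num_channels, size_t num_frames) {
  if (!api->FillInterleavedOutputBuffer(num_channels, num_frames, buffer)) {
    std::memset(buffer, 0, num_channels * num_frames * sizeof(float));
  }
}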

pathnames in Matlab HRIR script

Hello,

I'm using the Matlab scripts provided here to regenerate spherical-harmonic-encoded HRIR wav files for additional subjects in the SADIE database. When I first ran generatesadieshhrirs.m, still for "Subject 2", I got a "Name is nonexistent or not a directory" error in Matlab.

I made two changes to filepath names and now everything is working:

  1. In shhrirsymmetric.m, I changed addpath( '../ambisonics/ambix/'); addpath( '../ambisonics/shelf_filters/'); to addpath( '../../ambisonics/ambix/'); addpath( '../../ambisonics/shelf_filters/'); (lines 34-35)

  2. In shbinauralrendersymmetric.m, I changed line 39 from addpath( '../ambisonics/ambix/'); to addpath( '../../ambisonics/ambix/');

Am I using the scripts in a manner different from what was intended, or is this just a small legacy error? Obviously, if I run the constituent scripts from their own directories, they work just fine, but since they're being called by generatesadieshhrirs, we end up in a different directory than intended. Could there instead be a variable containing the path to the matlab directory, with the rest of the path to specific files hard-coded? That way, any script could be run from any directory inside matlab/.

In any case, thank you for the wonderful code; it's elegant and intuitive, and very fun to use!

Does pffft have a license?

I am trying to understand what dependency licenses exist for this project. The Bitbucket link to pffft does not seem to contain a license file. Does anyone know what license pffft uses?

Simulate propagation delay?

Does Resonance simulate propagation delay? I'm using Unity, if that matters.

I initially suspected that Unity's doppler support would use a variable delay line (that seems like the most straightforward way to implement doppler, and as a side effect it would also delay based on distance), but from what I can tell that doesn't seem to be the case.

I'm working on an outdoor simulation using real recorded audio data, so it's important to have a handle on what aspects of propagation are being modeled.
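(For reference, a sketch of the variable-delay-line idea mentioned above: delaying each sample by distance divided by the speed of sound yields doppler as a side effect when the distance changes. This is illustrative engine-side code, not something Resonance exposes:)

#include <cmath>
#include <cstddef>
#include <vector>

class PropagationDelayLine {
 public:
  PropagationDelayLine(float sample_rate_hz, float max_distance_m)
      : sample_rate_hz_(sample_rate_hz),
        buffer_(static_cast<size_t>(max_distance_m / kSpeedOfSound *
                                    sample_rate_hz) + 2, 0.0f) {}

  // Delay one sample by the current source-listener distance (meters).
  float Process(float input, float distance_m) {
    buffer_[write_pos_] = input;
    const float delay_samples = distance_m / kSpeedOfSound * sample_rate_hz_;
    float read_pos = static_cast<float>(write_pos_) - delay_samples;
    if (read_pos < 0.0f) read_pos += static_cast<float>(buffer_.size());
    // Linear interpolation between the two neighboring samples.
    const size_t i0 = static_cast<size_t>(read_pos) % buffer_.size();
    const size_t i1 = (i0 + 1) % buffer_.size();
    const float frac = read_pos - std::floor(read_pos);
    write_pos_ = (write_pos_ + 1) % buffer_.size();
    return buffer_[i0] * (1.0f - frac) + buffer_[i1] * frac;
  }

 private:
  static constexpr float kSpeedOfSound = 343.0f;  // m/s in air
  float sample_rate_hz_;
  std::vector<float> buffer_;
  size_t write_pos_ = 0;
};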
