Valve Corporation
Steam Audio supports Windows (32 bit and 64 bit), Linux (32 bit and 64 bit), macOS, Android (armv7, arm64, x86, x64), and iOS platforms.
Steam Audio supports Unity 2017.3+, Unreal Engine 4.27+, and FMOD Studio 2.0+.
Steam Audio
Home Page: https://valvesoftware.github.io/steam-audio/
License: Apache License 2.0
I've been having issues with exporting a scene in 4.16 and 4.17. At the moment, in 4.17, my only option in the drop-down menu next to Build is 'Steam Audio: Bake Indirect Sound..'
I've followed the instructions for setting up the scene (creating the shortcut with -audiomixer and then tagging objects in the scene), but I'm wondering what has happened to the export scene option, and how to get it working now?
Reported via Steam Audio Community Forum -
http://steamcommunity.com/app/596420/discussions/0/1290691937711370700/
The Game tab needs to be visible for the EndOfFrameUpdate function to be called, which updates the source and listener positions.
Hi,
I'm developing a GStreamer plugin based on the Phonon C API. I created a first-order test file that I can play using iplApplyAmbisonicsBinauralEffect, and it gets rendered properly. However, if I try to rotate the ambisonics data using iplRotateAmbisonicsAudioBuffer before I apply the effect, nothing happens to the data in the output buffer (I tried both in-place rotation and separate in/out buffers; the rotator is initialized and the rotation is set). What I do looks more or less like:
//once at beginning:
iplCreateBinauralRenderer
iplCreateAmbisonicsBinauralEffect
iplCreateAmbisonicsRotator
//each time the viewer moves/rotates
iplSetAmbisonicsRotation // the quat is normalized.
//on each audio frame:
iplRotateAmbisonicsAudioBuffer
iplApplyAmbisonicsBinauralEffect
I tried rotation in each axis but it looks like the data remains untouched.
many thanks in advance
mecowhy
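For reference, first-order ambisonics rotation is mathematically just a 3-D rotation applied to the directional channels, with the omni channel left untouched; if the API call appears to do nothing, comparing its output against this reference math can confirm whether the buffer really is unchanged. A small Python sketch of a yaw-only rotation (illustrative math only, not the Phonon API; the sign convention for field-versus-listener rotation is an assumption):

```python
import math

def rotate_foa_yaw(w, x, y, z, theta):
    """Rotate one first-order ambisonics sample by yaw angle theta (radians).

    W (omni) and Z (vertical) are unchanged; the horizontal X/Y pair is
    rotated like a 2-D vector. Whether positive theta means the field or
    the listener rotates depends on the renderer's convention.
    """
    c, s = math.cos(theta), math.sin(theta)
    return w, c * x - s * y, s * x + c * y, z
```

A 90-degree yaw should move all energy from the X channel into the Y channel while leaving W and Z alone; if the rotated buffer is bit-identical to the input instead, the rotation genuinely is not being applied.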
Are there any plans to support alternative output sample formats, such as s16?
I couldn't see anything in the API about modelling ITD/ILD. Is this planned, or is there another technology that can be used in conjunction with Steam Audio to achieve it?
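On the s16 question above: until an integer output format is supported natively, the float32 output can be converted by the caller. A minimal sketch (plain Python for clarity; the clamp-then-scale approach is the usual convention, not a Steam Audio API):

```python
def float32_to_s16(samples):
    """Convert float samples in [-1.0, 1.0] to signed 16-bit values,
    clamping anything out of range first."""
    out = []
    for s in samples:
        s = max(-1.0, min(1.0, s))        # clamp to the legal float range
        out.append(int(round(s * 32767.0)))
    return out
```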
First off, thanks for releasing this product; it's great to see a first-class, free, cross-platform audio library! However, similar to other projects in the ValveSoftware org (OpenVR, vogl, Source SDK, etc.) that come bundled with source code, it would be great to see steam-audio released with its source code, not only the binaries and headers.
This would allow the community to provide patches for bugs, submit PRs for providing additional functionality, and help out with porting to other platforms. I'm not sure if this is on the roadmap at all for the library or if there are licensing issues involved preventing this.
I downloaded the .chm from the releases page and opened it up. I see the Contents on the left, but clicking on any of the pages loads nothing but a blank white page.
There are popping artifacts when using different DSP Buffer Sizes than Default in Unity. We are able to reproduce the popping for DSP Buffer Size set to Best Latency.
There is a problem with the Unity integration: when you use AudioSource.PlayOneShot() (https://docs.unity3d.com/ScriptReference/AudioSource.PlayOneShot.html), the PhononAudioSource doesn't seem to have any effect. Since PlayOneShot() is probably the most common way to play sounds in Unity, this seems like a rather important issue.
(Using Beta5 version)
I would like to use the C API to accurately simulate how indirect sound behaves in a direction-dependent way in an arbitrary room. For example, this setup:
----------
|       s|
|-----   |
|r       |
----------
...with a single sound receiver and a single sound source. Direct sound is blocked by a wall, so most sound should reach the receiver from the right side, due to the reflection along the right-side wall. I also want to simulate which direction the source sound is facing, so the sound that the receiver gets would be more muffled if the source is facing left, compared to facing right.
I tried to use a mesh around the source to simulate "direction", but I did not get the expected results. I tried completely surrounding the source with a mesh that absorbs all sound, but the sound didn't get blocked. In fact, I was sometimes able to get sound coming in louder from the left of the receiver!
Can this library support this scenario?
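One workaround worth noting: if the engine itself doesn't model source directivity, a direction-dependent gain can be applied to the source signal before it enters the simulation. A sketch of a standard cardioid-family pattern (illustrative math; the parameter names are not Steam Audio API fields):

```python
def directivity_gain(dipole_weight, cos_angle):
    """Gain of a cardioid-family directivity pattern.

    dipole_weight: 0.0 -> omnidirectional, 1.0 -> pure dipole;
    0.5 gives a cardioid. cos_angle is the cosine of the angle between
    the source's facing direction and the direction to the listener.
    """
    return abs((1.0 - dipole_weight) + dipole_weight * cos_angle)
```

With dipole_weight = 0.5, the source is at full level when facing the listener (cos_angle = 1) and silent when facing directly away (cos_angle = -1), which matches the "more muffled when facing left" behavior described above.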
If the indirect sound produced by Steam Audio isn't what you'd expect, it can be hard to track down the cause.
Use multiple CPU threads when baking in the Unity editor.
Allow a slider to scale the scene in Unity.
I have been trying to figure out how many sources with convolution could be in a scene, how many with only direct sound, and so on. I noticed that even if there is no convolution effect at all, the maxConvolutionSources value has a big impact on the execution speed of iplGetMixedEnvironmentalAudio. It is fast when set to 0, 1, or even 5, but higher than that I start getting a lot of crackling, independent of the actual number of convolution effects in use.
I would expect this setting to only affect memory consumption (since you need memory for each source to store samples for a duration of irDuration, or something like that), not speed.
If, for some internal reason, this is expected behavior, it should be mentioned in the documentation.
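For intuition about why maxConvolutionSources can be expensive: each potential source slot plausibly carries a multi-channel impulse response of irDuration seconds, and both storage and per-frame mixing scale with the slot count. A back-of-the-envelope estimate (the parameter names here are illustrative, not actual API fields):

```python
def ir_storage_bytes(max_sources, ir_duration_s, sample_rate,
                     ambisonics_order, bytes_per_sample=4):
    """Rough storage for one float32 impulse response per ambisonics
    channel per potential convolution source."""
    channels = (ambisonics_order + 1) ** 2   # order 1 -> 4 channels
    samples_per_ir = int(ir_duration_s * sample_rate)
    return max_sources * channels * samples_per_ir * bytes_per_sample
```

For example, 8 sources with a 1-second first-order IR at 48 kHz is already about 6 MB of IR data, and a mixer that iterates over every slot each frame would touch all of it regardless of how many sources are actually playing.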
Hi, I added Phonon effects to all our ambient sounds in one of our levels, and that broke the sound engine completely. The threshold appears to be 8 effects: it works with 8, and the ninth breaks the engine.
Hi, apologies in advance for opening an issue essentially to ask a question, I'm limited in terms of places to turn for guidance on this one!
I know this feature isn't technically supported, but I've been trying to get this plugin to work with my own custom HRTFs in Unity. I think I sort of understand how it's supposed to work, from a combination of the guide posted on the app's Steam page and trial and error, but I've hit something of a dead end.
I have three functions (assigned to the _____callback variables in the HrtfParams):
public void onLoadHrtf(int numSamples, int numSpectrumSamples, Phonon.FFTHelper fft, System.IntPtr data)
public void onUnloadHrtf()
public void onLookupHrtf(System.IntPtr direction, System.IntPtr leftHrtf, System.IntPtr rightHrtf)
However, the problem is that I can't find any way to access my HRTF set once I've loaded it using onLoadHrtf! I've checked, as far as I can tell, every class in the Phonon namespace going entirely off Visual Studio prompts that pop up when I start typing "phonon."
I don't know if I'm misunderstanding the intended way of implementing this, or if it's actually impossible to get custom HRTFs working with this plugin at the moment. Literally any tips/pointers would be appreciated - I'm utterly stumped!
Add support in Steam Audio for UWP, specifically for HoloLens builds.
Any idea when the Unreal version will become available? Time estimate?
Cheers, looks amazing!
Whenever I do this, the game lags for 0.5 s to 1 s. I thought it might be my PC, but every component is running well below its limits; nowhere near maximum performance, and CPU usage stays at 37%. This happens in built projects as well.
I'm using an Intel Xeon E3-1231 v3 @ 3.40 GHz (up to 3.9 GHz).
To allow sounds to be heard through solid objects (albeit muffled) even when no reflection or diffraction paths exist, transmission of sound through solid objects should be modeled.
There are multiple calls to FindObjectOfType in the Steam Audio Unity plugin, which add significant overhead to application load times.
http://steamcommunity.com/app/596420/discussions/0/133260492053776048/
Packaging fails with:
error LNK2019: unresolved external symbol "unsigned int __cdecl SteamAudio::GetNumTrianglesForStaticMesh(class AStaticMeshActor *)"
error LNK2019: unresolved external symbol "unsigned int __cdecl SteamAudio::GetNumTrianglesAtRoot(class AActor *)"
From IPLVector3's detailed description:
Phonon uses a right-handed coordinate system, with the x-axis pointing right, the y-axis pointing up, and the z-axis pointing ahead. Position and direction data obtained from a game engine or audio engine must be properly transformed before being passed to any Phonon API function.
I think that this is wrong, because it's the left-handed coordinate system that has the z-axis pointing ahead; in a right-handed coordinate system, the z-axis points backward.
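The report above is consistent with the standard definitions: x-right, y-up, z-ahead is the left-handed convention (as in Unity), while a right-handed system with x-right and y-up has -z pointing ahead (as in OpenGL, which appears to be the convention Steam Audio intends). Converting between the two is a single sign flip; a trivial sketch:

```python
def lh_to_rh(v):
    """Convert a left-handed vector (x right, y up, +z ahead, as in
    Unity) to a right-handed one (x right, y up, -z ahead, as in
    OpenGL) by negating the z component."""
    x, y, z = v
    return (x, y, -z)
```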
I'm interested in using a custom HRTF for the binaural renderer. I see there is a parameter for this task: IPLbyte* hrtfData, a pointer to a byte array containing HRTF data, in the function IPLAPI IPLerror iplCreateBinauralRenderer(IPLContext context, IPLRenderingSettings renderingSettings, IPLbyte* hrtfData, IPLhandle* renderer);
There is no documentation about the format of this byte array. Can anyone help me?
Currently, geometry needs to be pre-exported at design time and cannot change during gameplay. Add support for moving geometry during gameplay in the Unity plugin for Steam Audio.
There are some limitations when accessing game geometry at runtime in Unity. To avoid them, we require that the scene be pre-exported at design time.
Some indicator should be available at the Phonon Source or Phonon Listener component level which tells whether data has been baked for that particular component.
The output of iplApplyAmbisonicsBinauralEffect is quiet and muffled compared to the same signal going through iplApplyBinauralEffect.
It's really odd, but when I look at the spectra, it's almost as if the iplApplyAmbisonicsBinauralEffect spectrum has about a quarter of the amplitude, and is about 10% squished, like a slight pitch-down.
Hi,
Using C API, steamaudio_api_2.0-beta.6, OSX. I am using IPL_HRTFDATABASETYPE_DEFAULT and IPL_CONVOLUTIONTYPE_PHONON.
I hear consistent zipper noise (quiet clicks) when changing the direction of a sound source. The effect is very noticeable with smaller frame sizes (e.g. 64), but is present at a lower frequency at larger frame sizes (e.g. 1024). I imagine this is because the source direction is sampled once per frame via iplApplyBinauralEffect(), and not being interpolated over time? For a VR HMD, nearly all sources' relative directions are dynamic, because the listener's head is moving, so this seems like a big issue. Am I missing something in the SDK? Is the undocumented iplApplyBinauralEffectWithParameters() helpful to deal with this?
Relevant code:
#include "steamaudio_api_2.0-beta.6/include/phonon.h"
#include <math.h>   // for sin, cos, M_PI
struct {
IPLContext context;
IPLRenderingSettings settings;
IPLHrtfParams hrtfParams;
IPLAudioFormat source_format;
IPLAudioFormat output_format;
IPLhandle renderer = 0;
IPLhandle binaural = 0;
// pre-allocated buffers at maximum vector size
IPLfloat32 source_buffer[4096];
IPLfloat32 output_buffer[4096 * 2];
} phonon;
void init(double samplerate = 44100, int framesize = 64) {
// default direction in front of the listener, to avoid (0, 0, 0)
IPLVector3 direction;
direction.x = 0;
direction.y = 0;
direction.z = -1;
phonon.context.allocateCallback = 0;
phonon.context.freeCallback = 0;
phonon.context.logCallback = phonon_log_function;
phonon.settings.convolutionType = IPL_CONVOLUTIONTYPE_PHONON;
// various options:
phonon.hrtfParams.type = IPL_HRTFDATABASETYPE_DEFAULT; // or IPL_HRTFDATABASETYPE_CUSTOM
phonon.hrtfParams.hrtfData = 0; // Reserved. Must be NULL.
// TODO: allow custom HRTFs; implement these:
phonon.hrtfParams.numHrirSamples = 0;
phonon.hrtfParams.loadCallback = 0;
phonon.hrtfParams.unloadCallback = 0;
phonon.hrtfParams.lookupCallback = 0;
phonon.settings.samplingRate = samplerate;
phonon.settings.frameSize = framesize;
iplCreateBinauralRenderer(phonon.context, phonon.settings, phonon.hrtfParams, &phonon.renderer);
// a single mono source
phonon.source_format.channelLayoutType = IPL_CHANNELLAYOUTTYPE_SPEAKERS;
phonon.source_format.channelLayout = IPL_CHANNELLAYOUT_MONO;
phonon.source_format.numSpeakers = 1;
phonon.source_format.channelOrder = IPL_CHANNELORDER_INTERLEAVED;
phonon.output_format.channelLayoutType = IPL_CHANNELLAYOUTTYPE_SPEAKERS;
phonon.output_format.channelLayout = IPL_CHANNELLAYOUT_STEREO;
phonon.output_format.numSpeakers = 2;
phonon.output_format.channelOrder = IPL_CHANNELORDER_INTERLEAVED;
iplCreateBinauralEffect(phonon.renderer, phonon.source_format, phonon.output_format, &phonon.binaural);
}
void perform(double **ins, long numins, double **outs, long numouts, long sampleframes) {
// phonon uses float32 processing, so we need to copy :-(
IPLAudioBuffer outbuffer;
outbuffer.format = phonon.output_format;
outbuffer.numSamples = sampleframes;
outbuffer.interleavedBuffer = phonon.output_buffer;
IPLAudioBuffer inbuffer;
inbuffer.format = phonon.source_format;
inbuffer.numSamples = sampleframes;
inbuffer.interleavedBuffer = phonon.source_buffer;
// copy input:
{
t_double * src = ins[0];
IPLfloat32 * dst = phonon.source_buffer;
int n = sampleframes;
while (n--) { *dst++ = *src++; }
}
// rotate at 3 hz:
static float t = 0.f;
t += M_PI * 2. * 3. * sampleframes/(44100.);
IPLVector3 dir;
dir.x = sin(t);
dir.y = 0.;
dir.z = cos(t);
// Unit vector from the listener to the point source,
// relative to the listener's coordinate system.
// (outbuffer was already set up above; redeclaring it here would not compile.)
iplApplyBinauralEffect(phonon.binaural,
inbuffer,
dir,
IPL_HRTFINTERPOLATION_BILINEAR,
outbuffer);
// copy output:
{
IPLfloat32 * src = phonon.output_buffer;
t_double * dst0 = outs[0];
t_double * dst1 = outs[1];
int n = sampleframes;
while (n--) {
*dst0++ = *src++;
*dst1++ = *src++;
}
}
}
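A common application-level mitigation for the zipper noise described above, when the renderer only samples direction once per frame, is to render the frame with both the previous and the current direction and crossfade between the two outputs. A minimal linear-crossfade sketch (plain Python; not a Steam Audio API):

```python
def crossfade(prev_frame, next_frame):
    """Linearly blend from prev_frame to next_frame over one frame,
    masking the discontinuity caused by a per-frame parameter jump."""
    n = len(prev_frame)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 1.0   # ramp 0 -> 1 across the frame
        out.append((1.0 - w) * prev_frame[i] + w * next_frame[i])
    return out
```

This doubles the rendering cost per frame, so a cheaper variant is to crossfade only when the direction has changed by more than some threshold since the last frame.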
Currently you can't really do underwater scenes properly with Steam Audio, since the speed of sound appears to be hardcoded and cannot be changed.
It would be useful to have an additional property in the IPLMaterial struct, which allows us to change the speed of sound for all sounds travelling through a mesh with such a material.
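For a sense of scale, propagation delay over the same distance differs by roughly a factor of four between air and water, which is why a hardcoded speed of sound breaks underwater scenes. A quick comparison with nominal speeds (roughly 343 m/s in air and 1480 m/s in water; real values vary with temperature, salinity, and pressure):

```python
def propagation_delay(distance_m, speed_m_per_s):
    """Seconds for sound to cover distance_m at the given speed."""
    return distance_m / speed_m_per_s

# nominal speeds over a 100 m path
delay_air = propagation_delay(100.0, 343.0)     # roughly 0.29 s
delay_water = propagation_delay(100.0, 1480.0)  # roughly 0.07 s
```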
I am using the Steam Audio bindings from Rust, where data is immutable by default. Would it be possible to annotate which pointers are const in phonon.h?
Hello, I have noticed that the website makes no mention of Valve's Source engine, although the promotional picture on the main page shows a screenshot of Counter-Strike: Global Offensive.
Are there any plans to bring Steam Audio support for developers of Source 1?
Allow transmission support for sound reflection.
Some sort of indicator which tells the user whether they need to rebake data for a Phonon Source or Phonon Listener component due to changes in the scene.
When the real-time settings are turned up too high, the game crashes on exit; when playing in the editor, it takes the whole Unity editor with it.
While playing there is no problem (it's even stable and fluid), but as soon as I try to exit, it crashes every time. The exact point where it crashes seems to differ from scene to scene and from PC to PC (e.g. after it crashed on my notebook, it worked on my PC with the same settings).
One example:
These settings work like a charm both when playing and on exit...
...while these work like a charm when playing, but crash on exit every time (taking the whole Unity editor with them!)
I have had this problem several times; the only workaround seems to be lowering the settings. But that's no real solution, since my PC should be powerful enough for somewhat higher settings, given that it's perfectly fluid while playing.
My environment:
Windows 10 Pro 64-bit (10.0, Build 15063) (15063.rs2_release.170317-1834)
Intel(R) Xeon(R) CPU E3-1231 v3 @ 3.40GHz (8 CPUs), ~3.4GHz
8192MB RAM
Allow changing the material associated with scene geometry during gameplay, in real time.
Sounds that are occluded by pillars, walls, etc. sound too aggressively attenuated because diffraction is not simulated. Adding support for simulating diffraction along with reflections should help reduce some of these issues.
In the Unity plugin, the Audio Source can be configured to use a non-physical distance attenuation curve, and the Phonon Effect can be configured to use that curve. However, sound propagation simulation does not seem to be using the custom attenuation curve properly.
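For clarity about what's being compared: the physical model is inverse-distance attenuation, while a Unity AudioSource curve is an arbitrary gain-versus-distance mapping. A sketch of both (illustrative, not the plugin's implementation):

```python
def inverse_distance_gain(distance, min_distance=1.0):
    """Physically based falloff: unity gain inside min_distance,
    then 1/d beyond it."""
    return min(1.0, min_distance / max(distance, 1e-6))

def curve_gain(distance, points):
    """Piecewise-linear custom curve, where points is a sorted list of
    (distance, gain) pairs, similar to what a Unity curve provides."""
    if distance <= points[0][0]:
        return points[0][1]
    for (d0, g0), (d1, g1) in zip(points, points[1:]):
        if distance <= d1:
            t = (distance - d0) / (d1 - d0)
            return g0 + t * (g1 - g0)
    return points[-1][1]
```

A propagation simulation that bakes in the physical 1/d model will disagree with the custom curve unless it re-evaluates the curve at simulation time, which may explain the behavior reported above.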
Hi all,
I don't know if it is just me, but I cannot hear a proper difference in the room acoustics when changing materials. For example, metallic materials should produce a more reverberant environment because their absorption is lower; with a carpet material, it should be the other way around.
I have also tried the Custom material and, in my opinion, I can't hear a difference.
Am I doing something wrong? Because this is not working for me.
Thank you!
As soon as I create an APK for Android, only occlusion continues to work; any reverb/material settings and so on just stop working. This affects the Android application as well as the editor.
After I compile the APK, the only way to get it working in the editor again is to clean all of Unity's temporary files via git clean -fx, or to compile a build for Windows. But that doesn't solve the problem that I can't use Steam Audio on Android properly.
Room A has a GameObject "Walls" (full of wall child elements) with a Phonon Geometry component ("export all children" enabled) and a Phonon Material; it works well with occlusion (all real-time).
A second Room B, with the same structure, produces no sound at all (when occlusion is enabled).
Room B works if I remove the Walls tree from Room A.
The following functions don't seem to be documented in the header file or in the provided documentation file.
IPLAPI IPLerror iplCreateSimulationData(IPLSimulationSettings simulationSettings,
    IPLRenderingSettings renderingSettings, IPLhandle* simulationData);
IPLAPI IPLvoid iplDestroySimulationData(IPLhandle* simulationData);
IPLAPI IPLint32 iplGetNumIrSamples(IPLhandle simulationData);
IPLAPI IPLint32 iplGetNumIrChannels(IPLhandle simulationData);
IPLAPI IPLvoid iplGenerateSimulationData(IPLhandle simulationData, IPLhandle environment,
    IPLVector3 listenerPosition, IPLVector3 listenerAhead, IPLVector3 listenerUp, IPLVector3* sources);
IPLAPI IPLvoid iplGetSimulationResult(IPLhandle simulationData, IPLint32 sourceIndex, IPLint32 channel,
    IPLfloat32* buffer);
There should be a reminder to pre-export the scene, or an automatic pre-export (I don't know how long pre-exporting larger scenes takes, so maybe at least an option for that?).
We often forget the pre-export, and then we either wonder why changes have no effect, or the Unity editor crashes.
Is it possible to add some kind of notification when a pre-export is needed?
Allow all, or a subset, of Phonon Source or Phonon Listener effects to be baked with a single Bake button.
Users are observing a buzzing artifact when teleporting characters or the player.