Sonosthesia

This is the landing page for the Sonosthesia project. It gives a high level overview of the different components and planned evolutions. The project is founded and principally driven by Jonathan Thorpe, inspired by an old vision of XR as an artistic audio/visual performance environment, the first draft of which dates back to 2013. After a long pause while working full time as software architect for the no code XR creation tool Minsar, and sadly seeing it die due to lack of funds, the time has come for a second draft with many lessons learned. Chief amongst these is the need to integrate with existing tech stacks rather than trying to replace them.

Ongoing Projects

Real time audio reactivity

Audio reactivity is a powerful mechanism which can bring life to a real time scene (XR or plain 3D). The core of the approach is to extract sound descriptors from audio in order to drive other aspects of the scene. These can be procedural graphics, shader or VFX parameters, procedural movement or physics, lighting or whatever the need might be. A number of different approaches are being investigated to ease this process:

  • Accessing audio data from AudioSource or AudioListener components and extracting energy in octave bands (a band energy sketch follows this list). See com.sonosthesia.audio. This can be used for timeline audio tracks playing through AudioSource components.
  • Extracting audio analysis ahead of time to save on performance at runtime, using the audio-pipeline, and playing the results back on the Unity timeline using custom tracks and assets.
  • In the context of live musical performance using external DAWs such as Live or Logic Pro, audio analysis is performed in plugins (currently only implemented in Max 4 Live) and relayed to clients (currently Unity) using the node connector app. This allows graphics to react in real time to ongoing musical performances.
  • (Planned) Using core features of FMOD to run audio analysis on audio streams, allowing analysis to run on interactive audio which can be controlled by user interactions.
  • (Planned) Plugging into host system music player apps and system sound to allow reactive visuals to be driven by user content. The feasibility of on the fly source separation (to isolate different instruments) will be investigated.
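
As a rough illustration of the octave band approach, here is a minimal TypeScript sketch that sums FFT magnitude bins into octave bands. It assumes a magnitude spectrum has already been extracted (for example via AudioSource.GetSpectrumData or the audio-pipeline); the function name and default values are illustrative, not part of com.sonosthesia.audio.

```typescript
// Minimal sketch: sum FFT magnitude bins into octave bands.
// Bin i covers frequency i * sampleRate / fftSize; band edges double
// with each octave, starting from a configurable base frequency.
function octaveBandEnergies(
  spectrum: Float32Array, // FFT magnitudes, length = fftSize / 2
  sampleRate: number,
  fftSize: number,
  baseFrequency = 31.25, // lower edge of the first band (Hz), illustrative
  bandCount = 10
): number[] {
  const binWidth = sampleRate / fftSize;
  const energies = new Array<number>(bandCount).fill(0);
  for (let band = 0; band < bandCount; band++) {
    const low = baseFrequency * 2 ** band;
    const high = baseFrequency * 2 ** (band + 1);
    const lowBin = Math.max(1, Math.floor(low / binWidth));
    const highBin = Math.min(spectrum.length - 1, Math.ceil(high / binWidth));
    for (let bin = lowBin; bin <= highBin; bin++) {
      energies[band] += spectrum[bin] * spectrum[bin]; // energy ~ magnitude squared
    }
  }
  return energies;
}
```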

Bridging the gap from Game Engines to DAWs

While it's minimalist and old, MIDI is still the best option we have for controlling music software remotely. This option has been exploited by Virtuoso-VR and MoveMusic amongst others. Sonosthesia aims to expand the bridge, in particular exploiting the opportunities afforded by Max 4 Live to allow more intricate bilateral interactions between XR controllers and music production software. This includes the DAWs broadcasting state which is not easily mapped to MIDI (a relay sketch follows the list), such as:

  • Spectral analysis parameters.
  • Currently playing clips.
  • Automation curve values.
  • Detailed transport and time signature.
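
As a rough illustration of how such state could be relayed to clients, here is a hypothetical TypeScript sketch using the ws and @msgpack/msgpack packages. The message shape, address and port are assumptions for illustration, not the actual Sonosthesia schema.

```typescript
// Illustrative sketch: broadcast DAW transport state to connected clients
// over a websocket, encoded with MessagePack. The envelope shape and
// address below are hypothetical, not the actual Sonosthesia schema.
import { WebSocketServer, WebSocket } from 'ws';
import { encode } from '@msgpack/msgpack';

interface TransportState {
  playing: boolean;
  beatPosition: number;            // current position in beats
  tempo: number;                   // BPM
  timeSignature: [number, number];
}

const server = new WebSocketServer({ port: 3689 }); // port is illustrative

function broadcastTransport(state: TransportState): void {
  // Wrap the state in an addressed envelope so clients can route by type.
  const payload = encode({ address: '/daw/transport', content: state });
  for (const client of server.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload);
    }
  }
}

// Example: relay a state update received from a Max 4 Live device.
broadcastTransport({
  playing: true,
  beatPosition: 16.5,
  tempo: 120,
  timeSignature: [4, 4],
});
```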

A demo app is available and will be updated regularly.


Performance environment design

Building on the environment design tools knowledge acquired while working on Minsar, Sonosthesia aims to provide useful generation tools for instruments such as keyboards, harps, xylophones and drum sets, as well as more exotic forms afforded by the dematerialized nature of XR environments. These generation tools focus on:

  • XR playability using controllers and hand tracking.
  • Spatial and musical configurability (scale, sensitivity etc.), as sketched after this list.
  • Real time channel and per note musical expression using mappings.
  • Affordances providing real time visual and haptic feedback to increase presence.
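
As a hypothetical illustration of spatial and musical configurability, the TypeScript sketch below lays keys out along an axis from a chosen scale. All names, scales and defaults are illustrative, not the actual Keyboard Builder API.

```typescript
// Hypothetical sketch: lay out keys for a generated XR keyboard.
// Each key gets a MIDI note from a scale and a position along one axis.
const SCALES: Record<string, number[]> = {
  major: [0, 2, 4, 5, 7, 9, 11],
  minorPentatonic: [0, 3, 5, 7, 10],
};

interface Key {
  midiNote: number;
  x: number; // meters along the keyboard axis
}

function buildKeyboard(
  scale: string,
  rootNote = 60,    // middle C
  keyCount = 15,
  keySpacing = 0.05 // meters between key centers
): Key[] {
  const intervals = SCALES[scale];
  const keys: Key[] = [];
  for (let i = 0; i < keyCount; i++) {
    const octave = Math.floor(i / intervals.length);
    const degree = i % intervals.length;
    keys.push({
      midiNote: rootNote + 12 * octave + intervals[degree],
      x: i * keySpacing,
    });
  }
  return keys;
}

// Example: a fifteen key C major keyboard spanning just over two octaves.
const keys = buildKeyboard('major');
```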

[Image: Keyboard Builder]

Tools for performant procedural graphics

Pushing the limits of procedural graphics within the constraints imposed by the high FPS required for a smooth XR experience, Sonosthesia draws on top content creators such as Catlike Coding, Keijiro and Gabriel Aguiar, expanding on their work while allowing intricate real time parameter control using sound descriptors or other signals. Avenues currently being investigated include:

  • Exploring dynamic noise generators (see the sketch after this list).
  • Procedural shape fragment shaders with mapped parameters.
  • VFX graphs with mapped parameters and event attributes.
  • Procedural mesh generation and deformations using the Unity Job System.
  • ECS based animation drivers.
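
The TypeScript sketch below illustrates the dynamic noise idea: a 1D value noise generator whose amplitude and frequency are modulated by an incoming sound descriptor. It is a minimal sketch, not the project's actual implementation.

```typescript
// Cheap deterministic pseudo-random value in [0, 1) for a lattice point.
function hash(n: number): number {
  const s = Math.sin(n * 127.1) * 43758.5453;
  return s - Math.floor(s);
}

// 1D value noise: interpolate hashed lattice values with smoothstep.
function valueNoise(t: number): number {
  const i = Math.floor(t);
  const f = t - i;
  const u = f * f * (3 - 2 * f); // smoothstep weight
  return hash(i) * (1 - u) + hash(i + 1) * u;
}

// Map a sound descriptor (e.g. normalized band energy in [0, 1]) to noise:
// louder input yields larger excursions and faster wobble.
function drivenNoise(time: number, descriptor: number): number {
  const frequency = 1 + descriptor * 8;
  const amplitude = descriptor;
  return amplitude * (valueNoise(time * frequency) * 2 - 1);
}
```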

A Unity demo app demonstrates some of the techniques with results compiled in a YouTube playlist.

Planned evolutions

JUCE based audio plugins

Max 4 Live presents huge advantages in terms of processing and control possibilities as well as extremely fast iteration times. As such it is the perfect tool for experimentation, ideation and development. It has the massive drawback of only running in Ableton Live (and only the Suite version, which bundles Max, at that). As the tools and requirements mature, reviving old JUCE based MIDI relay and Audio Analysis plugins to align with the evolving Sonosthesia protocols (based on OSC/UDP, websockets and MessagePack) will allow the same opportunities to be available in DAWs supporting VST and AU (as in, all of them).
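
As a minimal illustration of the OSC/UDP leg of such a protocol, the TypeScript sketch below hand-encodes a single-float OSC message and sends it over UDP. The address and port are illustrative, not the actual Sonosthesia conventions.

```typescript
// Minimal sketch: encode a single-float OSC message and send it over UDP.
import dgram from 'node:dgram';

// OSC strings are null-terminated and padded to a 4-byte boundary.
function oscPadString(s: string): Buffer {
  const length = Math.ceil((s.length + 1) / 4) * 4;
  const buffer = Buffer.alloc(length); // zero-filled, so padding is free
  buffer.write(s, 'ascii');
  return buffer;
}

// Message = padded address + padded type tag string + big-endian arguments.
function oscFloatMessage(address: string, value: number): Buffer {
  const argument = Buffer.alloc(4);
  argument.writeFloatBE(value);
  return Buffer.concat([oscPadString(address), oscPadString(',f'), argument]);
}

const socket = dgram.createSocket('udp4');
const message = oscFloatMessage('/sonosthesia/brightness', 0.75); // illustrative address
socket.send(message, 9000, '127.0.0.1', () => socket.close());
```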

Integrated sound synthesis

Remotely controlling sound synthesizers running within DAWs is very powerful and enabling for musicians and domain specialists, but it is not an avenue for widespread public interest. In order for Sonosthesia to spread to a wider audience the sound generation mechanisms must run within the XR application itself. Sadly Unity has little to offer in this field, as their DSP graph has not evolved beyond its experimental stage. There are several avenues which can be explored to integrate the sound generation mechanisms into standalone apps with greater reach:

  • Using FMOD with prebounced multilayered sonic hits and textures which can be modulated and mixed at runtime (a rendering sketch follows this list).
  • Porting essential project components to Unreal Engine and using MetaSounds, which contains enough sound synthesis building blocks to make a proper synthesizer without sinking all available project resources.
  • Making an Apple/Unity hybrid app (a concept explored at length while working on Wonderland) for Apple Vision Pro and using Audio Kit on the Apple side to handle sound synthesis.
  • Hoping that Unity gives their developers some better tools to handle audio synthesis instead of relying on Keijiro to do all the work.
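
As a rough illustration of the prebounced hit idea, the TypeScript sketch below renders a decaying sine layer into a sample buffer that a runtime mixer such as FMOD could then modulate and mix. It is a hypothetical sketch and does not use any FMOD API.

```typescript
// Hypothetical sketch: render one layer of a multilayered sonic hit as a
// sine tone with an exponential decay envelope.
function renderHit(
  frequency: number, // Hz
  duration: number,  // seconds
  decay: number,     // larger values fade faster
  sampleRate = 48000
): Float32Array {
  const samples = new Float32Array(Math.floor(duration * sampleRate));
  for (let i = 0; i < samples.length; i++) {
    const t = i / sampleRate;
    const envelope = Math.exp(-decay * t);
    samples[i] = envelope * Math.sin(2 * Math.PI * frequency * t);
  }
  return samples;
}

// Two layers of a hit, to be balanced and mixed at runtime.
const body = renderHit(110, 1.0, 3);
const click = renderHit(880, 0.1, 40);
```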

Full blown audio visual compositions

Pulling together the different aspects of the project into standalone XR apps which adapt procedurally generated visuals to the user's environment (using spatial mapping data), while using a mixture of baked and realtime interactive sonic generation to drive them, is an exciting prospect. Reviving old electronic compositions such as Starfall and Eurus and giving them an XR twist is on the TODO list.

Potential revenue streams

Currently the project is driven only by personal time and money investment from Jonathan Thorpe, which will remain sustainable for another year or two at most. Beyond that, revenue streams will be required for further work. A number of avenues are being considered to generate them:

  • A steady stream of cheap but non-free XR apps containing audio visual compositions, sold on app stores, similar to how consumers were once expected to buy audio tracks on music hosting websites before the days of streaming.
  • Collaborations with XR studios wishing to enrich their productions with the intricate audio visual symbiosis offered by Sonosthesia, with dedicated support.
  • Collaborations with music software companies wishing to provide innovative sound synthesis control mechanisms to their users, with XR instruments and control platforms tailored to specific instruments and racks.
  • Government or industry funding opportunities will also be explored; they kept Minsar alive for a good few years so there is always some hope.
