
TagBox

Steer OpenAI's Jukebox with Music Taggers!

The closest thing we have to VQGAN+CLIP for music!

Unsupervised Source Separation By Steering Pretrained Music Models

Read the paper here. Submitted to ICASSP 2022.

Abstract

We showcase an unsupervised method that repurposes deep models trained for music generation and music tagging for audio source separation, without any retraining. An audio generation model is conditioned on an input mixture, producing a latent encoding of the audio that is used to generate audio. This generated audio is fed to a pretrained music tagger that creates source labels. The cross-entropy loss between the tag distribution for the generated audio and a predefined distribution for an isolated source is used to guide gradient ascent in the (unchanging) latent space of the generative model. This system does not update the weights of the generative model or the tagger, and only relies on moving through the generative model's latent space to produce separated sources. We use OpenAI's Jukebox as the pretrained generative model, and we couple it with four kinds of pretrained music taggers (two architectures and two tagging datasets). Experimental results on two source separation datasets show this approach can produce separation estimates for a wider variety of sources than any tested supervised or unsupervised system. This work points to the vast and heretofore untapped potential of large pretrained music models for audio-to-audio tasks like source separation.
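The core loop is small: freeze Jukebox and the tagger, and take gradient steps on the latent encoding of the mixture so that the decoded audio's tag distribution matches the target source's tags. Below is a minimal sketch of that idea in PyTorch; the names `vqvae`, `tagger`, and `target_tags` are placeholders, not the actual TagBox API (see the repository and Colab for the real implementation).

```python
# Minimal sketch of the steering loop (placeholder names, not the TagBox API).
import torch
import torch.nn.functional as F

def steer(mixture, vqvae, tagger, target_tags, steps=200, lr=10.0):
    # Encode the mixture once; the resulting latents are the only thing we optimize.
    with torch.no_grad():
        latents = vqvae.encode(mixture)
    latents = latents.clone().requires_grad_(True)

    # Both pretrained models stay frozen; only the latents move.
    for p in list(vqvae.parameters()) + list(tagger.parameters()):
        p.requires_grad_(False)

    opt = torch.optim.SGD([latents], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        audio = vqvae.decode(latents)      # audio generated from the current latents
        tag_probs = tagger(audio)          # assumed to output per-tag probabilities in [0, 1]
        # Cross-entropy between the predicted tags and the predefined target
        # distribution for the desired isolated source.
        loss = F.binary_cross_entropy(tag_probs, target_tags)
        loss.backward()
        opt.step()

    return vqvae.decode(latents).detach()
```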

Try it yourself!

Click here to see our GitHub repository.

Run it yourself in the Colab notebook here: Open in Colab

Example Output - Separation

Audio examples are not displayed on https://github.com/ethman/tagbox; please click here to see the demo page.

TagBox excels at separating prominent melodies from sparse mixtures.

Wonderwall by Oasis - Vocal Separation

Mixture


TagBox Output

| Hyperparameter | Setting |
| --- | --- |
| FFT size(s) | 512, 1024, 2048 |
| lr | 10.0 |
| steps | 200 |
| tagger model(s) | fcn, hcnn, musicnn |
| tagger data | MTAT |
| selected tags | All vocal tags |

Howl's Moving Castle, Piano & Violin Duet - Violin Separation

Mixture


TagBox Output

| Hyperparameter | Setting |
| --- | --- |
| FFT size(s) | 512, 1024, 2048 |
| lr | 10.0 |
| steps | 100 |
| tagger model(s) | fcn, hcnn, musicnn |
| tagger data | MTG-Jamendo |
| selected tags | Violin |

Smoke On The Water by Deep Purple - Vocal Separation

Mixture


TagBox Output

| Hyperparameter | Setting |
| --- | --- |
| FFT size(s) | 512, 1024, 2048 |
| lr | 5.0 |
| steps | 200 |
| tagger model(s) | fcn, hcnn |
| tagger data | MTAT |
| selected tags | All vocal tags |

Example Output - Improving Perceptual Quality & "Style Transfer"

Adding multiple FFT sizes helps with perceptual quality

Similar to multi-scale spectral losses, we notice that the quality of the output increases when we use masks computed at multiple FFT sizes.
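One way to picture this is below: build a soft mask from the steered audio at several FFT sizes, apply each mask to the mixture, and average the results. This is only an illustrative sketch, not the exact TagBox code.

```python
# Illustrative sketch: multi-resolution soft masking of the mixture.
import torch

def multi_fft_mask(mixture, estimate, n_ffts=(512, 1024, 2048)):
    outs = []
    for n_fft in n_ffts:
        hop = n_fft // 4
        window = torch.hann_window(n_fft)
        mix_spec = torch.stft(mixture, n_fft, hop, window=window, return_complex=True)
        est_spec = torch.stft(estimate, n_fft, hop, window=window, return_complex=True)
        # Soft mask from magnitudes, applied to the mixture's complex STFT.
        mask = (est_spec.abs() / (mix_spec.abs() + 1e-8)).clamp(0.0, 1.0)
        masked = torch.istft(mix_spec * mask, n_fft, hop, window=window,
                             length=mixture.shape[-1])
        outs.append(masked)
    # Averaging over FFT sizes reduces the warbling heard with a single resolution.
    return torch.stack(outs).mean(dim=0)
```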

Mixture


TagBox with fft_size=[1024]

Notice the warbling effects in the following example:


TagBox with fft_size=[1024, 2048]

Those warbling effects are mitigated by using two FFT sizes:

These results, however, are not reflected in the SDR evaluation metrics.

"Style Transfer"

Removing the masking step enables Jukebox to generate any audio that optimizes the tag. In some situations, TagBox will pick out the melody and resynthesize it, but it adds lots of artifacts, making it sound like the audio was recorded in a snowstorm.

Mixture


"Style Transfer"

Here, we optimize the "guitar" tag without the mask. Notice that the "All it says to you" melody sounds like a guitar being plucked in a snowstorm:



Cite

If you use this in your academic research, please cite the following:

@misc{manilow2021unsupervised,
  title={Unsupervised Source Separation By Steering Pretrained Music Models}, 
  author={Ethan Manilow and Patrick O'Reilly and Prem Seetharaman and Bryan Pardo},
  year={2021},
  eprint={2110.13071},
  archivePrefix={arXiv},
  primaryClass={cs.SD}
}

tagbox's People

Contributors

ethman


tagbox's Issues

Guidance on reproducing reported SDRi in the paper

Hi,

First of all, great work! And thanks for sharing the code, much appreciated!

I'm trying to reproduce the vocal part of the MUSDB18 results shown in Table 1 of the paper, but I'm getting really bad SDRi results.

(1) For the data preprocessing part, I cut the original mixture into 5-second segments where vocals are active (in some cases the vocal part is only silence);

(2) For separation, I'm using the code snippet from the colab in the repo. In my implementation, my parameters are:

TAGGER_SR = 16000  # Hz
JUKEBOX_SAMPLE_RATE = 44100  # Hz

# tagger source
tagger_training_data = 'MagnaTagATune' #@param ["MTG-Jamendo", "MagnaTagATune"] {allow-input: false}
tag = 'Vocals'

# audio processing parameters
fft512 = True 
fft1024 = True 
fft2048 = True 

n_ffts = []
if fft512:
    n_ffts.append(512)
if fft1024:
    n_ffts.append(1024)
if fft2048:
    n_ffts.append(2048)

# network architecture selections
fcn = True #@param {type:"boolean"}
hcnn = True #@param {type:"boolean"} 
musicnn = True #@param {type:"boolean"}
crnn = False #@param {type:"boolean"}
sample = False #@param {type:"boolean"}
se = False #@param {type:"boolean"}
attention = False #@param {type:"boolean"}
short = False #@param {type:"boolean"}
short_res = False #@param {type:"boolean"}

# separation paras
use_mask = True
lr = 5.0  
steps = 30 

(3) For evaluating SDRi, I'm using the asteroid package instead of museval (when using museval, SDR can be changed just by multiplying the audio samples by a scalar, even when not evaluating SI-SDR).

(4) Also, I'm using the saved *_masked.wav files to compute SDR (actually, *_raw_masked.wav gets a higher SDR).

So I'm wondering which step could be the cause of the bad results? Thank you so much!
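(For reference, the SI-SDR improvement I'm computing follows the usual definition; a bare-bones NumPy version is sketched below. This is only an illustration, not the asteroid or museval implementation.)

```python
# Bare-bones SI-SDR improvement (SI-SDRi), shown only to make the metric explicit.
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    # Scale-invariant SDR: project the estimate onto the reference first.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))

def si_sdri(estimate, reference, mixture):
    # Improvement over using the unprocessed mixture as the estimate.
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```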

Ignore non-instrument tags in loss calculation

Right now we set the ground-truth value of all non-instrument tags to 0.0, but we still compute a loss on the Jukebox'd audio for those tags, so TagBox creates audio that drives the non-instrument tags toward 0.0 too. We really should ignore the non-instrument tags altogether, i.e., set the weight of non-instrument tags to 0.0 when calculating the loss at every iteration.
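A minimal sketch of that fix (hypothetical names, not the existing TagBox code): weight the per-tag loss by an instrument mask so ignored tags contribute nothing to the gradient.

```python
# Sketch of the proposed fix: zero the loss weight for non-instrument tags
# instead of pushing their predictions toward 0.0.
import torch
import torch.nn.functional as F

def tag_loss(pred_tags, target_tags, instrument_mask):
    # pred_tags: tagger probabilities in [0, 1]; instrument_mask: 1.0 for
    # instrument tags, 0.0 for tags that should be ignored entirely.
    per_tag = F.binary_cross_entropy(pred_tags, target_tags, reduction="none")
    return (per_tag * instrument_mask).sum() / instrument_mask.sum()
```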
