CREPE Pitch Tracker

CREPE is a monophonic pitch tracker based on a deep convolutional neural network operating directly on the time-domain waveform input. CREPE is state-of-the-art (as of early 2018), outperforming popular pitch trackers such as pYIN and SWIPE.

Further details are provided in the following paper:

CREPE: A Convolutional Representation for Pitch Estimation
Jong Wook Kim, Justin Salamon, Peter Li, Juan Pablo Bello.
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2018.

We kindly request that academic publications making use of CREPE cite the aforementioned paper.

Using CREPE

CREPE requires the following Python dependencies (an example install command follows the list):

  • numpy & scipy
  • keras (only tested with the TensorFlow backend)
  • h5py
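
A minimal install sketch, assuming a pip-based environment. The pinned versions simply mirror the tested versions noted later in this README (Keras 2.1.3, TensorFlow 1.5.0) and are an assumption, not a requirement stated by the project:

$ pip install numpy scipy h5py "keras==2.1.3" "tensorflow==1.5.0"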

This repository includes a pre-trained version of the CREPE model for easy use. To estimate the pitch of audio_file.wav and save the output to audio_file.f0.tsv as tab-separated values, run:

$ python crepe.py audio_file.wav audio_file.f0.tsv

Currently we do not support importing CREPE as a Python module, though this functionality may be added in a future version.
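
Until then, one workaround is to shell out to the script and read the result back. The sketch below is an assumption about such a workflow, not an official API; in particular, the column layout of the output TSV (time, frequency) is assumed rather than documented here:

import subprocess
import numpy as np

def estimate_pitch(wav_path, tsv_path):
    # Run the bundled command-line script (hypothetical wrapper, not a public API).
    subprocess.run(['python', 'crepe.py', wav_path, tsv_path], check=True)
    # Assumed layout: one tab-separated (time, frequency) row per frame.
    return np.loadtxt(tsv_path, delimiter='\t')

pitch = estimate_pitch('audio_file.wav', 'audio_file.f0.tsv')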

Please note

  • The current version only supports wav files as input.
  • While in principle the code should run with any Keras backend, it has only been tested with the TensorFlow backend. The model was trained using Keras 2.1.3 and TensorFlow 1.5.0.
  • The pre-trained model included was trained on the MDB-STEM-Synth dataset [1], which contains vocal and instrumental recordings, and is therefore expected to work best on these types of audio signals.
  • Prediction is significantly faster if Keras (and the corresponding backend) is configured to run on GPU.
  • The current pre-trained model does not perform voicing activity detection (VAD), meaning it will return a continuous series of frequency values, including for sections of the recording where there is no pitch (e.g. silence). The output for such unpitched/silent segments is arbitrary, so any voicing decision must be made downstream; see the masking sketch after this list.
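
One simple post-hoc option is to mask frames whose short-time energy is low. The sketch below is illustrative only: the window length, threshold, and TSV column layout are assumptions, not part of CREPE.

import numpy as np
from scipy.io import wavfile

sr, audio = wavfile.read('audio_file.wav')
audio = audio.astype(np.float32)
if audio.ndim > 1:
    audio = audio.mean(axis=1)         # mix down to mono
audio /= np.abs(audio).max() + 1e-9    # normalize to [-1, 1]

# Assumed layout of the CREPE output: one (time, frequency) row per frame.
f0 = np.loadtxt('audio_file.f0.tsv', delimiter='\t')
times, freqs = f0[:, 0], f0[:, 1]

# Short-time RMS energy around each output frame
# (~64 ms window, an arbitrary illustrative choice).
half = int(0.032 * sr)
rms = np.array([
    np.sqrt(np.mean(audio[max(0, int(t * sr) - half):int(t * sr) + half] ** 2))
    for t in times
])

voiced = rms > 0.02                    # illustrative threshold; tune per recording
freqs = np.where(voiced, freqs, 0.0)   # convention: 0 Hz marks unvoiced frames

A fixed RMS threshold is crude; a real application would likely use a proper VAD or a model confidence measure instead, but the point stands that the voicing decision happens outside the model.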

References

[1] J. Salamon et al. "An Analysis/Synthesis Framework for Automatic F0 Annotation of Multitrack Datasets." Proceedings of the International Society for Music Information Retrieval (ISMIR) Conference. 2017.
