
dlepaux / realtime-bpm-analyzer


A library that uses the Web Audio API to analyze the BPM of files and audio nodes. It can also compute the BPM of streams, as well as in real time from a microphone. This tool might be useful for music producers, DJs, or anybody who wants to get the BPM of any music source.

Home Page: https://www.realtime-bpm-analyzer.com

License: Apache License 2.0

Languages: JavaScript 1.48%, Shell 3.24%, TypeScript 94.96%, HTML 0.25%, CSS 0.07%
Topics: bpm, audio, realtime, webaudio-api, tempo, realtime-bpm, audiocontext, audionode, audiobuffer, javascript

realtime-bpm-analyzer's Introduction

Realtime BPM Analyzer


Welcome to Realtime BPM Analyzer, a powerful and easy-to-use TypeScript/JavaScript library for detecting the beats-per-minute (BPM) of an audio or video source in real-time.

Getting started

To install this module in your project, just run the command below:

npm install realtime-bpm-analyzer

To learn more about how to use the library, you can check out the documentation.

Features

  • Dependency-free library that utilizes the Web Audio API to analyze audio or video sources and provide accurate BPM detection.
  • Can be used to analyze audio or video nodes, streams, and files.
  • Allows you to compute the BPM while the audio or video is playing.
  • Lightweight and easy to use, making it a great option for web-based music production and DJing applications.
  • Supports MP3, FLAC and WAV formats.

Usage

If you encounter issues along the way, remember that you can reach out via the chat or GitHub's Discussions feature!

Player strategy

Measure or detect the BPM from a web player.

This example shows how to deal with a simple audio node.

  1. You need an AudioNode to analyze, for example an audio element like this:
<audio src="./new_order-blue_monday.mp3" id="track"></audio>
  2. Create the AudioWorkletProcessor with createRealTimeBpmProcessor, then create and pipe the filters to the AudioWorkletNode (realtimeAnalyzerNode).
import { createRealTimeBpmProcessor, getBiquadFilter } from 'realtime-bpm-analyzer';

const audioContext = new AudioContext();
const realtimeAnalyzerNode = await createRealTimeBpmProcessor(audioContext);

// Set the source with the HTML Audio Node
const track = document.getElementById('track');
const source = audioContext.createMediaElementSource(track);
const lowpass = getBiquadFilter(audioContext);

// Connect nodes together
source.connect(lowpass).connect(realtimeAnalyzerNode);
source.connect(audioContext.destination);

realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM') {
    console.log('BPM', event.data.data.bpm);
  }
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event.data.data.bpm);
  }
};
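
One practical note: browsers suspend an AudioContext created without a user gesture, so if no BPM messages ever arrive, the context may simply be suspended. A minimal sketch of the usual workaround (the play-button element is a hypothetical addition to the page):

// Resume the AudioContext from a user gesture; otherwise no audio flows
// through the graph and the analyzer never receives any data
document.querySelector('#play-button').addEventListener('click', async () => {
  await audioContext.resume();
  await track.play();
});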

Continuous Analysis strategy

Analyze the BPM of a source continuously. This feature is quite simple: once the stabilizationTime is reached, the analyzer discards all of its accumulated data. Why? Because the analyzer stores all the "valid peaks" for each threshold in order to find the best candidates, and keeping that data forever would leak memory.

Note: This approach is NOT recommended if you are using a microphone as the source, unless the microphone captures a clean, strong signal. Typically, if the BPM is never computed with this approach, you are probably capturing low-intensity audio (too far from the source, too much noise, or a directional microphone pointing the wrong way). A minimal microphone setup is sketched after the example below.

  1. Streams can be played with an AudioNode, so the approach is quite similar to the Player strategy.
<audio src="https://ssl1.viastreaming.net:7005/;listen.mp3" id="track"></audio>

Thank you IbizaSonica for the stream.

  2. Proceed as in the Player strategy, except that the continuousAnalysis flag must be turned on to periodically delete collected data.
import { createRealTimeBpmProcessor, getBiquadFilter } from 'realtime-bpm-analyzer';

const audioContext = new AudioContext();
const realtimeAnalyzerNode = await createRealTimeBpmProcessor(audioContext, {
  continuousAnalysis: true,
  stabilizationTime: 20_000, // Default is 20_000 ms, after which the library automatically deletes all collected data and restarts the BPM analysis
});

// Set the source with the HTML Audio Node
const track = document.getElementById('track');
const source = audioContext.createMediaElementSource(track);
const lowpass = getBiquadFilter(audioContext);

// Connect nodes together
source.connect(lowpass).connect(realtimeAnalyzerNode);
source.connect(audioContext.destination);

realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM') {
    console.log('BPM', event.data.data.bpm);
  }
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event.data.data.bpm);
  }
};
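
For microphone input (see the note above about signal quality), the wiring is similar. The sketch below is illustrative rather than taken from the official documentation; it assumes navigator.mediaDevices.getUserMedia is available and reuses the helpers shown above:

import { createRealTimeBpmProcessor, getBiquadFilter } from 'realtime-bpm-analyzer';

const audioContext = new AudioContext();
const realtimeAnalyzerNode = await createRealTimeBpmProcessor(audioContext, {
  continuousAnalysis: true,
});
const lowpass = getBiquadFilter(audioContext);

// Ask the browser for a microphone stream and wrap it in a MediaStream source
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const source = audioContext.createMediaStreamSource(stream);

// Connect the microphone to the analyzer only; do NOT connect it to
// audioContext.destination, otherwise the microphone is played back and loops
source.connect(lowpass).connect(realtimeAnalyzerNode);

realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event.data.data.bpm);
  }
};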

Local/Offline strategy

Analyze the BPM from files located on your desktop, tablet or mobile!

  1. Import the library
import * as realtimeBpm from 'realtime-bpm-analyzer';
  2. Use an input[type=file] to get the files you want.
<input type="file" accept=".wav,.mp3,.flac" onChange={event => this.onFileChange(event)}/>
  3. You can listen to the change event like so, and analyze the BPM of the selected files. You don't need to be connected to the Internet for this to work.
function onFileChange(event) {
  const audioContext = new AudioContext();
  // Get the first file from the list
  const file = event.target.files[0];
  const reader = new FileReader();
  reader.addEventListener('load', () => {
    // The file is uploaded, now we decode it
    audioContext.decodeAudioData(reader.result, audioBuffer => {
      // The result is passed to the analyzer
      realtimeBpm.analyzeFullBuffer(audioBuffer).then(topCandidates => {
        // Do something with the BPM
        console.log('topCandidates', topCandidates);
      });
    });
  });
  reader.readAsArrayBuffer(file);
};
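
analyzeFullBuffer resolves with an array of tempo candidates. Judging from the snippets elsewhere on this page (bpm[0].tempo), the strongest candidate can be read as sketched below; double-check the exact shape against the documentation:

realtimeBpm.analyzeFullBuffer(audioBuffer).then(topCandidates => {
  // Assumption: candidates are ordered best-first and each exposes a `tempo` property
  const best = topCandidates[0];
  console.log('Detected BPM:', best.tempo);
});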

Development

Realtime BPM Analyzer uses Web Test Runner for unit tests and Web Dev Server for dataset testing.

You will first need to install the project with the following commands:

npm install
npm run prepare

Unit Tests

To run the unit tests, you just have to run npm test.

Dataset Testing

To test a whole dataset of audio files, drop the files into testing/datasets and then run npm run testing:prepare to create a manifest that maps each file name to its BPM (typically the verified one) taken from the file's metadata.

{
  "my music file.mp3": 130
}

You also need to build the library with npm run build.

Once those steps are done you can run npm run testing to challenge the library against your dataset. The Local/Offline strategy is used here.

New features

If you're developing a new feature and want to test it in another project, you can run bin/build/to-project.sh nameOfTheProject. This command creates a package with npm pack and copies it into the target project nameOfTheProject sitting next to this one. You will then be able to test a production-like package.

Technical Documentation

TypeDoc is used to build the technical documentation of Realtime BPM Analyzer. To build the documentation, run npm run build:docs.

Commercial Usage

This library is distributed under the terms of the Apache License, Version 2.0. However, if you are interested in using this library in a commercial context or require a commercial license, please feel free to contact me.

For inquiries regarding commercial usage, licensing options, or any other questions, please contact me:

David Lepaux
[email protected]

I am open to discussing custom licensing arrangements to meet your specific needs and ensure that you can use this library in your commercial projects with confidence.

Roadmap

  • Add a confidence level to the detected tempo
  • Combine the Amplitude Thresholding strategy with others to improve BPM accuracy
  • Improve the continuous analysis in order to ignore drops and cuts
  • Monitor memory usage

Let me know what your most wanted feature is by opening an issue.

Credits

This library was inspired by the Tornqvist project, which was also based on Joe Sullivan's algorithm. Thank you to both of them.

realtime-bpm-analyzer's People

Contributors

dependabot[bot], dlepaux, gitter-badger, greenkeeper[bot], johnstoecker, nikakhachi


realtime-bpm-analyzer's Issues

createScriptProcessor deprecation

Hey,
awesome job on this!

I had a look through the code and noticed it uses createScriptProcessor to process the audio data.
This seems to be deprecated now and AudioWorklets are the replacement.

Do you have any plans to switch over to those?

Need Help to find BPM

I was trying to find the bpm of a local audio file, but I cannot figure out from where I can get the actual value as shown here. Here is a demo

I also tried to calculate the bpm of audio in real time but still could not figure it out, as shown here

import { createRealTimeBpmProcessor, getBiquadFilters } from 'realtime-bpm-analyzer';

let audioCtx;
let mediaElementSource;

const { type, node } = detectVideo();

if (!node) {
  return;
}

if (!audioCtx) {
  audioCtx = new AudioContext();
}

const realtimeAnalyzerNode = await createRealTimeBpmProcessor(audioCtx);

if (!mediaElementSource) {
  mediaElementSource = audioCtx.createMediaElementSource(node); // node is a YouTube video tag
}

const { lowpass, highpass } = getBiquadFilters(audioCtx);

mediaElementSource.connect(lowpass).connect(highpass).connect(realtimeAnalyzerNode);
mediaElementSource.connect(audioCtx.destination);

realtimeAnalyzerNode.port.postMessage({
  message: 'ASYNC_CONFIGURATION',
  parameters: {
    continuousAnalysis: true,
    stabilizationTime: 5_000,
  },
});

realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM') {
    console.log('BPM', event.data.result);
  }
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event.data.result);
  }
};

ReferenceError: clearTimeout is not defined

I'm trying to get this working with my React app and I am running into the following issue.

I copied ./node_modules/realtime-bpm-analyzer/dist/realtime-bpm-processor.js into my public directory and followed the steps in the Continuous Analysis strategy.

Upon activation, I'm seeing the following error repeating in the console:

realtime-bpm-processor.js:330 ReferenceError: clearTimeout is not defined
    at RealTimeBpmAnalyzer.<anonymous> (realtime-bpm-processor.js:260:11)
    at Generator.next (<anonymous>)
    at fulfilled (realtime-bpm-processor.js:7:26)

Screenshot 2023-04-01 002226

After learning about Audio Worklets, I think this might be a bug. If my understanding is correct, the processor runs in a separate execution context and does not have access to the DOM, therefore setTimeout/clearTimeout will not be available to the processor. Instead, the main thread has to be responsible for managing timers and triggering the stabilization by communicating with the worklet.
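
For illustration only, a rough sketch of that idea: keep the timer on the main thread, where setTimeout/clearTimeout exist, and notify the worklet through its port. The 'RESET' message name is hypothetical, not part of the library's API:

// Main thread: manage the stabilization timer here, not inside the processor
const stabilizationTime = 20_000;
const stabilizationTimer = setTimeout(() => {
  // Hypothetical message asking the processor to drop its collected peaks
  realtimeAnalyzerNode.port.postMessage({ message: 'RESET' });
}, stabilizationTime);

// clearTimeout is available here, unlike in the AudioWorkletGlobalScope
function cancelStabilization() {
  clearTimeout(stabilizationTimer);
}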

Realtime but not continuous analysis ?

I'm trying to use this nice package for realtime BPM analysis with line / mic input using navigator.getUserMedia.

BPM detection works fine, but I don't understand why the value is not updated when the input BPM changes.

Maybe I made a mistake in my code (or my understanding of it), but I think the analyzer does not update the BPM continuously.

Starting from the functional example, edit app.js:

'use strict';

const RealTimeBPMAnalyzer = require('realtime-bpm-analyzer');

const App = {
  init() {
    var AudioContext = window.AudioContext || window.webkitAudioContext;

    navigator.getUserMedia = (navigator.getUserMedia ||
      navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia ||
      navigator.msGetUserMedia);

    var context = new AudioContext();
    // Enable mic / line input watching
    navigator.getUserMedia({audio: true}, onStream.bind(this), function() {});

    function onStream(stream) {
      // Set the source with the microphone stream
      var input = context.createMediaStreamSource(stream);
      // Set the scriptProcessorNode to get PCM data in real time
      var scriptProcessorNode = context.createScriptProcessor(4096, 1, 1);

      // Connect everything together (do not connect input to context.destination to avoid sound looping)
      scriptProcessorNode.connect(context.destination);
      input.connect(scriptProcessorNode);

      var onAudioProcess = new RealTimeBPMAnalyzer({
        scriptNode: {
          bufferSize: 4096,
          numberOfInputChannels: 1,
          numberOfOutputChannels: 1
        },
        pushTime: 2000,
        pushCallback: function(err, bpm) {
          if (bpm && bpm.length)
            console.log('bpm', bpm[0].tempo); // <= When stabilized, always displays the same BPM
        }
      });

      // Attach realTime function to audioprocess event.inputBuffer (AudioBuffer)
      scriptProcessorNode.onaudioprocess = function (e) {
        onAudioProcess.analyze(e);
      };
    }
  }
};

module.exports = App;

Any ideas?

Source example code for Player strategy needs to be updated

Just a nit pick, but the src code on the readme page for "Player strategy" is missing some imports and definitions. For example, I had to update the import like this
import { createRealTimeBpmProcessor, getBiquadFilter } from 'realtime-bpm-analyzer';

And define an audio context like so
const audioContext = new AudioContext();

Also, the track appears to be missing; how is it defined and retrieved from the DOM / HTML page? These things might be really obvious to some people, but not to others :)

An in-range update of mocha is breaking the build 🚨

☝️ Greenkeeper’s updated Terms of Service will come into effect on April 6th, 2018.

Version 5.0.5 of mocha was just published.

Branch Build failing 🚨
Dependency mocha
Current Version 5.0.4
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

mocha is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push The Travis CI build failed Details

Release Notes v5.0.5

5.0.5 / 2018-03-22

Welcome @outsideris to the team!

🐛 Fixes

📖 Documentation

🔩 Other

Commits

The new version differs by 18 commits.

  • c11e1e2 Release v5.0.5
  • b5a556e add changes for v5.0.5 [ci skip]
  • 424ef84 increase default timeout for browser unit tests
  • 19104e3 fix debug msg in Runnable#slow; closes #2952
  • f4275b6 extract checking AMD bundle as own test
  • 19b764d Addressed feedback
  • 2c19503 Fixed linting
  • ab84f02 chore(docs): rewording pending tests
  • 6383916 fix to bail option works properly with hooks (#3278)
  • 0060884 keep hierarchy for skipped suites w/o a callback
  • 39df783 docs: Fix typo in an emoji
  • 27af2cf fix(changelog): update links to some PRs
  • d76f490 chore(ci): Remove PHANTOMJS_CDNURL, nit
  • 86af6bb fix my carelessness in e19e879
  • e19e879 ensure lib/mocha.js is not ignored by ESLint

There are 18 commits in total.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Audio offset in offline analysis

Thanks for the great library. I am wondering if it is possible to add time offsets to each BPM candidate? A use case would be in rhythm games, the chart designer often wants to align notes with beats, so having both start offset of the first bar and BPM will make the alignment much easier and more accurate.

An in-range update of nyc is breaking the build 🚨

Version 11.7.1 of nyc was just published.

Branch Build failing 🚨
Dependency nyc
Current Version 11.7.0
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

nyc is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push The Travis CI build failed Details

Commits

The new version differs by 2 commits.

  • 5e40c7c chore(release): 11.7.1
  • 5c0adb5 chore: explicit upgrade of istanbul-reports (#816)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Failed to execute 'createScriptProcessor' on 'BaseAudioContext': buffer size (13307982) must be 0 or a power of two between 256 and 16384.

Hi !
I am trying to get a real time bpm for the uploaded file in my React application.
However, the Error below is what I get.

EDIT: I understand the reason for the error, but the file is valid; I can slice it into different chunks and play them, so the file is not corrupted. What I don't understand is how to convert the existing arrayBuffer into something createScriptProcessor will accept.

Error :
Main.js:73 Uncaught (in promise) DOMException: Failed to execute 'createScriptProcessor' on 'BaseAudioContext': buffer size (13307982) must be 0 or a power of two between 256 and 16384.
    at Socket.<anonymous> (http://localhost:3000/static/js/bundle.js:1048:50)
    at Socket.Emitter.emit (http://localhost:3000/static/js/bundle.js:13328:20)
    at Socket.emitEvent (http://localhost:3000/static/js/bundle.js:93155:16)
    at Socket.onevent (http://localhost:3000/static/js/bundle.js:93140:12)
    at Socket.onpacket (http://localhost:3000/static/js/bundle.js:93096:14)
    at Manager.Emitter.emit (http://localhost:3000/static/js/bundle.js:13328:20)
    at Manager.ondecoded (http://localhost:3000/static/js/bundle.js:92610:10)
    at Decoder.Emitter.emit (http://localhost:3000/static/js/bundle.js:13328:20)
    at Decoder.add (http://localhost:3000/static/js/bundle.js:93733:15)
    at Manager.ondata (http://localhost:3000/static/js/bundle.js:92600:18)
Version :
"react": "^17.0.2",
"react-dom": "^17.0.2",
"react-scripts": "5.0.0",
"realtime-bpm-analyzer": "^2.1.6",
Code :
const audio = document.createElement("audio");
audio.src = window.URL.createObjectURL(uploadedFile);
audio.volume = 0.5;
audio.play();
audio.addEventListener("timeupdate", (e) => setCurrentAudioTime(e.path?.[0]?.currentTime || e.explicitOriginalTarget.currentTime));
audio.addEventListener("ended", () => setIsMusicEnded(true));
updateAudioPlayer(audio);
//
const audioContext = new AudioContext();
const source = audioContext.createMediaElementSource(audio);
const scriptProcessorNode = audioContext.createScriptProcessor(uploadedFile.size, 1, 1);
scriptProcessorNode.connect(audioContext.destination);
source.connect(scriptProcessorNode);
source.connect(audioContext.destination);
const onAudioProcess = new RealTimeBPMAnalyzer({
  scriptNode: {
    bufferSize: uploadedFile.size,
  },
  pushTime: 2000,
  pushCallback: (err, bpm) => {
    if (err) return console.log(err);
    console.log("bpm", bpm);
  },
});
scriptProcessorNode.onaudioprocess = (e) => {
  onAudioProcess.analyze(e);
};

In the code above, uploadedFile is a blob. I also tried converting it to an ArrayBuffer and using its size property (I know the number would be the same, but I still tried), and I get the same error.
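
For context, the error message itself points at the fix: createScriptProcessor only accepts 0 or a power of two between 256 and 16384, so the uploaded file's size cannot be used as a buffer size. A minimal correction of the snippet above, reusing the 4096 buffer size that appears in the other examples on this page:

// Use a fixed power-of-two buffer size instead of the uploaded file's size
const scriptProcessorNode = audioContext.createScriptProcessor(4096, 1, 1);

const onAudioProcess = new RealTimeBPMAnalyzer({
  scriptNode: {
    bufferSize: 4096, // must match the ScriptProcessorNode buffer size
  },
  pushTime: 2000,
  pushCallback: (err, bpm) => {
    if (err) return console.log(err);
    console.log("bpm", bpm);
  },
});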

Remove the need to expose the processor file

I have seen other web audio projects using the following technique to register audio worklets without the need to expose a separate file.

import {processor} from '../src/processor';

const blob = new Blob([processor], {type: 'application/javascript'});
const objectURL = URL.createObjectURL(blob);
await audioContext.audioWorklet.addModule(objectURL);
const audioWorkletNode = new AudioWorkletNode(audioContext, processorName);

Have you tried this? If it works it would make deployment and upgrade much simpler and avoid issues with caching, CORS, and HTTPS.

An example of this technique can be seen here:

https://github.com/esonderegger/web-audio-peak-meter/blob/759c119d832b73d9e81e9bca91baa7d6264dc62e/src/index.ts#L47-L60

An in-range update of eslint is breaking the build 🚨

The dependency eslint was updated from 6.0.0 to 6.0.1.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

eslint is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • continuous-integration/travis-ci/push: The Travis CI build failed (Details).

Release Notes for v6.0.1
  • b5bde06 Fix: --rulesdir option didn't work (fixes #11888) (#11890) (Toru Nagashima)
  • 13f0418 Fix: improve error message on --print-config (fixes #11874) (#11885) (Toru Nagashima)
  • 056c2aa Fix: improve diagnostics for shareable-config-missing errors (#11880) (Teddy Katz)
  • 566b7aa Docs: Update no-confusing-arrow with the new default option (#11886) (Yuping Zuo)
  • d07f3fa Fix: CLIEngine#getRules() contains plugin rules (fixes #11871) (#11872) (Toru Nagashima)
  • 21f4a80 Docs: Fix inconsistent linking in migration guide (#11881) (Teddy Katz)
  • f3a0774 Docs: Fix typo in 6.0.0 migration guide (#11870) (Kevin Partington)
Commits

The new version differs by 9 commits.

  • 54ee60b 6.0.1
  • 3975d50 Build: changelog update for 6.0.1
  • b5bde06 Fix: --rulesdir option didn't work (fixes #11888) (#11890)
  • 13f0418 Fix: improve error message on --print-config (fixes #11874) (#11885)
  • 056c2aa Fix: improve diagnostics for shareable-config-missing errors (#11880)
  • 566b7aa Docs: Update no-confusing-arrow with the new default option (#11886)
  • d07f3fa Fix: CLIEngine#getRules() contains plugin rules (fixes #11871) (#11872)
  • 21f4a80 Docs: Fix inconsistent linking in migration guide (#11881)
  • f3a0774 Docs: Fix typo in 6.0.0 migration guide (#11870)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

An in-range update of coveralls is breaking the build 🚨

Version 3.0.1 of coveralls was just published.

Branch Build failing 🚨
Dependency coveralls
Current Version 3.0.0
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

coveralls is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push The Travis CI build failed Details

Release Notes Additional CI support

Maintenance:

Commits

The new version differs by 9 commits.

  • e7ae2bf version bump; logger test fix
  • 9b892bf Verbose use log level debug
  • 9cfb496 Add buildkite support (#177)
  • aa8c257 Done callback waits for unlink in testRepoTokenDetection to prevent race condition (#179)
  • 18c71c2 Fix a mistype in tests: fs.exists -> fs.existsSync (#184)
  • bd667c6 Add Semaphore support (#180)
  • 10d8b3e Update examples to include Jest (#183)
  • 720ee7c Add license (#175)
  • 83ff2cb Add mention about AppVeyor to the README (#164)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Error: Could not find enough samples for a reliable detection.

I'm trying to connect it to a live radio stream, but it just keeps throwing that error. Is there a way to do this with a live feed?

  getBPM = () => {
    console.log("bpm");
    let { source } = this.state;
    let audioContext = this.props.audioContext;
    const scriptProcessorNode = audioContext.createScriptProcessor(4096, 1, 1);
    scriptProcessorNode.connect(audioContext.destination);
    source.connect(scriptProcessorNode);

    const onAudioProcess = new RealTimeBPMAnalyzer({
      debug: true,
      scriptNode: {
        bufferSize: 4096,
        numberOfInputChannels: 1,
        numberOfOutputChannels: 1,
      },
      pushTime: 1000,
      pushCallback: (err, bpm) => {
        console.log("65", err);
        console.log("bpm", bpm);
      },
    });
    scriptProcessorNode.onaudioprocess = (e) => {
      onAudioProcess.analyze(e);
    };
  };

Ableton Link

It would be fantastic if an "Ableton Link" interface was implemented

Issues after upgrading to v4

After upgrading to v4, the data returned during continuous analysis is inconsistent with the documentation.

This does not work:

realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event.data.result);
  }
};

Dumping the data to the console I see this:

Screenshot 2024-02-20 024845

So I figured that to get the stable BPM it should be:

realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event.data.data.bpm[0].tempo);
  }
};

Also, I noticed that you no longer provide a high-pass filter. I assume this is no longer required for analysis? It seems not, as the analyzer still works fine for me.

Feature

@raizpvp commented 8 days ago
Hey guys great work,
can you please add a function that is executed when the bass reaches a peak, like a live strobe?
many thanks

From PR: #37

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper App’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.
