
jordipons / musicnn

Pronounced as "musician", musicnn is a set of pre-trained deep convolutional neural networks for music audio tagging.

License: ISC License

Jupyter Notebook 95.77% Python 4.23%

musicnn's People

Contributors

carlthome, jordipons


musicnn's Issues

Beat detection

Hi,
Thanks for this wonderful work.
I wonder if there is a way to use this code for detecting beat locations in the temporal input (audio file).

Thanks again,
Koby

Extractor value error from call as per README

Hey Jordi

When I run the extractor as per the README:
taggram, tags = extractor(<path>, model='MTT_musicnn')
I get a ValueError:
File "<stdin>", line 1, in <module> ValueError: too many values to unpack (expected 2)

It works fine if I run it as per the tagging_example.ipynb:
taggram, tags = extractor(<path>, model='MTT_musicnn', extract_features=False)

Error:

>>> taggram, tags = extractor('Bounce08.wav', model='MTT_musicnn')
Computing spectrogram (w/ librosa) and tags (w/ tensorflow).. done!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 2)
>>> taggram, tags = extractor('Bounce08.wav', model='MTT_musicnn', extract_features=False)
Computing spectrogram (w/ librosa) and tags (w/ tensorflow).. done!
>>>

Edit: installed via PyPI.
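A likely explanation, sketched with a hypothetical stub below (not the real package code): on this build, `extract_features` appears to default to `True`, so `extractor` returns three values and unpacking into two fails. Either unpack all three or pass `extract_features=False`:

```python
# Hypothetical stub mirroring the observed behavior: with extract_features=True
# (apparently the default here), extractor returns three values, not two.
def extractor(file_name, model='MTT_musicnn', extract_features=True):
    taggram, tags, features = [[0.9]], ['guitar'], {'penultimate': [[0.0]]}
    return (taggram, tags, features) if extract_features else (taggram, tags)

taggram, tags, features = extractor('song.wav')                 # three values: OK
taggram, tags = extractor('song.wav', extract_features=False)   # two values: OK
```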

Running musicnn_test.ipynb throws exception

When taggram, tags = extractor(file_name, model='MTT_musicnn', extract_features=False) is run, this warning is repeated multiple times:

WARNING: Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x7f9a3e22e240>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x7f9a3e22e240>>: AssertionError: Bad argument number for Name: 3, expecting 4

and I get this error:

RuntimeError                              Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
    145     try:
--> 146         with sf.SoundFile(path) as sf_desc:
    147             sr_native = sf_desc.samplerate

~/.local/lib/python3.6/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
    628                                          format, subtype, endian)
--> 629         self._file = self._open(file, mode_int, closefd)
    630         if set(mode).issuperset('r+') and self.seekable():

~/.local/lib/python3.6/site-packages/soundfile.py in _open(self, file, mode_int, closefd)
   1183         _error_check(_snd.sf_error(file_ptr),
-> 1184                      "Error opening {0!r}: ".format(self.name))
   1185         if mode_int == _snd.SFM_WRITE:

~/.local/lib/python3.6/site-packages/soundfile.py in _error_check(err, prefix)
   1356         err_str = _snd.sf_error_number(err)
-> 1357         raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
   1358 

RuntimeError: Error opening './audio/joram-moments_of_clarity-08-solipsism-59-88.mp3': File contains data in an unknown format.

During handling of the above exception, another exception occurred:

NoBackendError                            Traceback (most recent call last)
<ipython-input-2-f690f5bcff5c> in <module>
      1 from musicnn.extractor import extractor
----> 2 taggram, tags, features = extractor(file_name, model='MTT_musicnn', extract_features=True)

~/Documents/musicnn/musicnn/extractor.py in extractor(file_name, model, input_length, input_overlap, extract_features)
    156     # batching data
    157     print('Computing spectrogram (w/ librosa) and tags (w/ tensorflow)..', end =" ")
--> 158     batch, spectrogram = batch_data(file_name, n_frames, overlap)
    159 
    160     # tensorflow: extract features and tags

~/Documents/musicnn/musicnn/extractor.py in batch_data(audio_file, n_frames, overlap)
     39 
     40     # compute the log-mel spectrogram with librosa
---> 41     audio, sr = librosa.load(audio_file, sr=config.SR)
     42     audio_rep = librosa.feature.melspectrogram(y=audio, 
     43                                                sr=sr,

~/.local/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
    161         if isinstance(path, (str, pathlib.PurePath)):
    162             warnings.warn("PySoundFile failed. Trying audioread instead.")
--> 163             y, sr_native = __audioread_load(path, offset, duration, dtype)
    164         else:
    165             raise (exc)

~/.local/lib/python3.6/site-packages/librosa/core/audio.py in __audioread_load(path, offset, duration, dtype)
    185 
    186     y = []
--> 187     with audioread.audio_open(path) as input_file:
    188         sr_native = input_file.samplerate
    189         n_channels = input_file.channels

~/.local/lib/python3.6/site-packages/audioread/__init__.py in audio_open(path, backends)
    114 
    115     # All backends failed!
--> 116     raise NoBackendError()

NoBackendError: 

Based on the FAQ, there seems to be a problem with librosa, soundfile and audioread.

Running the vgg_example yields the same warning, but outputs the correct taggram

However, when I switched musicnn_example to opening filename='./audio/TRWJAZW128F42760DD_test.mp3' instead of './audio/joram-moments_of_clarity-08-solipsism-59-88.mp3', the warning is still raised, but a taggram does show. Perhaps this is a problem with my version of librosa, because it seems that 'joram...mp3' cannot be opened. Opening 'TRW...mp3' yields the warning /home/kevin/.local/lib/python3.6/site-packages/librosa/core/audio.py:162: UserWarning: PySoundFile failed. Trying audioread instead. warnings.warn("PySoundFile failed. Trying audioread instead.") which is consistent with the FAQ.

However, the AutoGraph warning seems to be a different problem altogether.
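As a quick check (a sketch, not part of musicnn): for mp3 input, librosa falls back to audioread, which needs an external decoder such as ffmpeg or GStreamer; NoBackendError is exactly what gets raised when no decoder is found.

```python
import shutil

# ffmpeg is the most common audioread backend. If this prints None,
# installing ffmpeg and making sure it is on the PATH typically resolves
# NoBackendError for mp3 inputs.
print(shutil.which('ffmpeg'))  # a path if ffmpeg is installed, None otherwise
```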

Can't I use `musicnn` on Windows 10?

I have Python 3.8.5.
When installing musicnn, it reports errors about packages.

stable-diffusion\env λ .\python.exe -m pip install musicnn
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting musicnn
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/1b/6f/38229e7d99c438e11114bbfa39c8c39185458c398011d0b6d7d7c7401617/musicnn-0.1.0-py3-none-any.whl (29.3 MB)
Requirement already satisfied: tensorflow>=1.14 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from musicnn) (2.10.0)
Collecting librosa>=0.7.0
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/e4/1c/23ef2fd02913d65d43dc7516fc829af709314a66c6f0bdc2e361fdcecc2d/librosa-0.9.2-py3-none-any.whl (214 kB)
Requirement already satisfied: numba>=0.45.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from librosa>=0.7.0->musicnn) (0.56.2)
Requirement already satisfied: scipy>=1.2.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from librosa>=0.7.0->musicnn) (1.9.1)
Requirement already satisfied: decorator>=4.0.10 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from librosa>=0.7.0->musicnn) (5.1.1)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/5b/da/bd63187b2ca1b97c04c270df90c934a97cbe512c8238ab65c89c1b043ae2/librosa-0.9.1-py3-none-any.whl (213 kB)
     |████████████████████████████████| 213 kB 6.4 MB/s
Requirement already satisfied: packaging>=20.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from librosa>=0.7.0->musicnn) (21.3)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/5a/90/61c239b4bf9aee2ac16dcf1a1f05d5f112ac0ad301f06feacb42fa6834aa/librosa-0.9.0-py3-none-any.whl (211 kB)
     |████████████████████████████████| 211 kB 6.8 MB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/54/19/a0e2bdc94bc0d1555e4f9bc4099a0751da83fa6e1e6157ec005564f8a98a/librosa-0.8.1-py3-none-any.whl (203 kB)
     |████████████████████████████████| 203 kB 6.8 MB/s
Collecting audioread>=2.0.0
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/5d/cb/82a002441902dccbe427406785db07af10182245ee639ea9f4d92907c923/audioread-3.0.0.tar.gz (377 kB)
Collecting joblib>=0.14
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/91/d4/3b4c8e5a30604df4c7518c562d4bf0502f2fa29221459226e140cf846512/joblib-1.2.0-py3-none-any.whl (297 kB)
     |████████████████████████████████| 297 kB 6.4 MB/s
Requirement already satisfied: setuptools<60 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from numba>=0.45.1->librosa>=0.7.0->musicnn) (59.8.0)
Requirement already satisfied: importlib-metadata in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from numba>=0.45.1->librosa>=0.7.0->musicnn) (4.12.0)
Collecting numba>=0.43.0
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/39/cc/bfb25cf17904eef3aba1e091758f1959c34f1ff558dd09a64feaeb3bd001/numba-0.56.0-cp38-cp38-win_amd64.whl (2.5 MB)
     |████████████████████████████████| 2.5 MB 6.8 MB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/a4/46/a4759a5bd7bbd09fa6b70dfca0ad55ee0fae72d48ca0febdc53b252cbfcf/numba-0.55.2-cp38-cp38-win_amd64.whl (2.4 MB)
     |████████████████████████████████| 2.4 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/56/96/10b81c9fc38360b4a9867da218c607d54ce3a2d3973c0a179f91bdd030f4/numba-0.55.1-cp38-cp38-win_amd64.whl (2.4 MB)
     |████████████████████████████████| 2.4 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/7b/bf/374490d2da3fb4ab510e70517447d8eff72cc9052f3b361d88d52037dbcd/numba-0.55.0-cp38-cp38-win_amd64.whl (2.4 MB)
     |████████████████████████████████| 2.4 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/7f/a8/b91e3e7a60fc9bcb9f1133475e44e375f2e14b73e6b96f7e6a3e6f2303ac/numba-0.54.1-cp38-cp38-win_amd64.whl (2.3 MB)
     |████████████████████████████████| 2.3 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/6e/1e/4fbf2390ffa7256abf4f0e0daf8c430f66ad263386aa1e175dc22573e4cf/numba-0.54.0-cp38-cp38-win_amd64.whl (2.3 MB)
     |████████████████████████████████| 2.3 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/9c/ce/d0a7573290bfe9a394d5ef3798d7f63978eeb940b6ceac75e792d660b9a3/numba-0.53.1-cp38-cp38-win_amd64.whl (2.3 MB)
     |████████████████████████████████| 2.3 MB ...
Collecting llvmlite<0.37,>=0.36.0rc1
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/6e/01/e30f3a93e4198f58b5bbcbdcdfce0b56956d2b8d99988f0db58fab23d1ed/llvmlite-0.36.0-cp38-cp38-win_amd64.whl (16.0 MB)
     |████████████████████████████████| 16.0 MB ...
Collecting numpy<1.17,>=1.14.5
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/b7/6f/24647f014eef9b67a24adfcbcd4f4928349b4a0f8393b3d7fe648d4d2de3/numpy-1.16.6.zip (5.1 MB)
     |████████████████████████████████| 5.1 MB ...
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from packaging>=20.0->librosa>=0.7.0->musicnn) (3.0.9)
Collecting pooch>=1.0
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/8d/64/8e1bfeda3ba0f267b2d9a918e8ca51db8652d0e1a3412a5b3dbce85d90b6/pooch-1.6.0-py3-none-any.whl (56 kB)
Requirement already satisfied: requests>=2.19.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from pooch>=1.0->librosa>=0.7.0->musicnn) (2.28.1)
Collecting appdirs>=1.3.0
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/3b/00/2344469e2084fb287c2e0b57b72910309874c3245463acd6cf5e3db69324/appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from requests>=2.19.0->pooch>=1.0->librosa>=0.7.0->musicnn) (1.26.11)
Requirement already satisfied: charset-normalizer<3,>=2 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from requests>=2.19.0->pooch>=1.0->librosa>=0.7.0->musicnn) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from requests>=2.19.0->pooch>=1.0->librosa>=0.7.0->musicnn) (3.3)
Requirement already satisfied: certifi>=2017.4.17 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from requests>=2.19.0->pooch>=1.0->librosa>=0.7.0->musicnn) (2022.6.15.1)
Collecting resampy>=0.2.2
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/f2/d3/5209fd2132452f199b1ddf0d084f9fd5f5f910840e3b282f005b48a503e1/resampy-0.4.2-py3-none-any.whl (3.1 MB)
     |████████████████████████████████| 3.1 MB 6.4 MB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/96/f6/819514dd8be3681fdd1dc81a94f5e1d51019c18e9e7b351c8e097a86e77f/resampy-0.4.1-py3-none-any.whl (3.1 MB)
     |████████████████████████████████| 3.1 MB ...
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/52/85/488fad73a9db1ae096f59c6927ba07e0d74f2ef30716b3de320431cd4929/resampy-0.4.0-py3-none-any.whl (3.1 MB)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/51/7e/7aec4c54c4b11ac8333dc01d0e910e692be7da944769e37f9e248537a3f1/resampy-0.3.1-py3-none-any.whl (3.1 MB)
     |████████████████████████████████| 3.1 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/59/00/2aba99630a823efa086b65f04b7025264b0dc924cae664dd897e6ba1f3d6/resampy-0.3.0-py3-none-any.whl (3.1 MB)
     |████████████████████████████████| 3.1 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/79/75/e22272b9c2185fc8f3af6ce37229708b45e8b855fd4bc38b4d6b040fff65/resampy-0.2.2.tar.gz (323 kB)
     |████████████████████████████████| 323 kB ...
Requirement already satisfied: six>=1.3 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from resampy>=0.2.2->librosa>=0.7.0->musicnn) (1.16.0)
Collecting scikit-learn>=0.19.1
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/cb/0b/e085436fce6daf49786bf0e1107ade7dcd22eb6110abb44b6eb6f29f9270/scikit_learn-1.1.2-cp38-cp38-win_amd64.whl (7.3 MB)
     |████████████████████████████████| 7.3 MB ...
Collecting scikit-learn!=0.19.0,>=0.14.0
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/40/90/73b54af0f59f813753b4f8305439476a77d73df2d1807a6f26d6da0d2cbc/scikit_learn-1.1.1-cp38-cp38-win_amd64.whl (7.3 MB)
     |████████████████████████████████| 7.3 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/f5/bf/a5e547e7277fe6fa1dd69656f353570f36ea514c7e4ee8f249566424b9f3/scikit_learn-1.1.0-cp38-cp38-win_amd64.whl (7.3 MB)
     |████████████████████████████████| 7.3 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/50/f5/2bfd87943a29870bdbe00346c9f3b0545dd7a188201297a33189f866f04e/scikit_learn-1.0.2-cp38-cp38-win_amd64.whl (7.2 MB)
     |████████████████████████████████| 7.2 MB 6.8 MB/s
Collecting scipy>=1.0.0
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/49/ff/adcafbc8d9b14cb06999d876c10eee20636714cbdb93e9f30f68dee5d8ee/scipy-1.9.0-cp38-cp38-win_amd64.whl (38.6 MB)
     |████████████████████████████████| 38.6 MB 6.8 MB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/8d/3e/e6f6fa6458e03ecd456ae6178529d4bd610a7c4999189f34d0668e4e69a6/scipy-1.8.1-cp38-cp38-win_amd64.whl (36.9 MB)
     |████████████████████████████████| 36.9 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/56/a3/591dbf477c35f173279afa7b9ba8e13d9c7c3d001e09aebbf6100aae33a8/scipy-1.8.0-cp38-cp38-win_amd64.whl (36.9 MB)
     |████████████████████████████████| 36.9 MB ...
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/b9/23/8c13a8973f5f695577f396fc2a6a920d00e91727bff173c48d03d1732a78/scipy-1.7.3-cp38-cp38-win_amd64.whl (34.2 MB)
     |████████████████████████████████| 34.2 MB 6.4 MB/s
Collecting soundfile>=0.10.2
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b8/de/24e4035f06540ebb4e9993238ede787063875b003e79c537511d32a74d29/SoundFile-0.10.3.post1-py2.py3.cp26.cp27.cp32.cp33.cp34.cp35.cp36.pp27.pp32.pp33-none-win_amd64.whl (689 kB)
Requirement already satisfied: cffi>=1.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from soundfile>=0.10.2->librosa>=0.7.0->musicnn) (1.15.1)
Requirement already satisfied: pycparser in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa>=0.7.0->musicnn) (2.21)
Requirement already satisfied: google-pasta>=0.1.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (0.2.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (1.48.1)
Requirement already satisfied: absl-py>=1.0.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (1.2.0)
Requirement already satisfied: typing-extensions>=3.6.6 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (4.3.0)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (0.4.0)
Requirement already satisfied: flatbuffers>=2.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (2.0.7)
Requirement already satisfied: opt-einsum>=2.3.2 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (3.3.0)
Requirement already satisfied: astunparse>=1.6.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (1.6.3)
Requirement already satisfied: keras-preprocessing>=1.1.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (1.1.2)
Requirement already satisfied: protobuf<3.20,>=3.9.2 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (3.19.4)
Requirement already satisfied: wrapt>=1.11.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (1.14.1)
Requirement already satisfied: h5py>=2.9.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (3.7.0)
Requirement already satisfied: keras<2.11,>=2.10.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (2.10.0)
Requirement already satisfied: termcolor>=1.1.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (2.0.1)
Requirement already satisfied: libclang>=13.0.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (14.0.6)
Collecting tensorflow>=1.14
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/0d/f0/1e509035d97a093a7f1f5da3df6aad5cddf798ce4f9cab9056b330207c82/tensorflow-2.9.2-cp38-cp38-win_amd64.whl (444.1 MB)
     |████████████████████████████████| 444.1 MB 65 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/23/84/e9aa1e62775d36202df20b19e442c2779499b43c9738daf222d7a8abd9fc/tensorflow-2.9.1-cp38-cp38-win_amd64.whl (444.1 MB)
     |████████████████████████████████| 444.1 MB 85 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/2b/80/e162cc9695573c6f3e075c062785023da9b75599447a31f8c71bc2953290/tensorflow-2.9.0-cp38-cp38-win_amd64.whl (444.1 MB)
     |████████████████████████████████| 444.1 MB 42 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/05/03/e043f6e02bd03d464fe842973c7fbfa9c5bb8b30da5d34cc2084d1ce083f/tensorflow-2.8.3-cp38-cp38-win_amd64.whl (438.3 MB)
     |████████████████████████████████| 438.3 MB 1.7 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/21/02/7bfeae6dc5e31ee7af2d2d70649551c53257981966d6e3a54ed3cba1bbf8/tensorflow-2.8.2-cp38-cp38-win_amd64.whl (438.3 MB)
     |████████████████████████████████| 438.3 MB 28 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/c9/ec/b92d979f83097c3a09cc9608e8df9b45a903115d43eeaef078c3b938a672/tensorflow-2.8.1-cp38-cp38-win_amd64.whl (438.3 MB)
     |████████████████████████████████| 438.3 MB 54 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/17/5b/dac6e0607e4186b9e157597cd96d945aa769c60ef9f9f1b7ddc174f39332/tensorflow-2.8.0-cp38-cp38-win_amd64.whl (438.0 MB)
     |████████████████████████████████| 438.0 MB 97 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/e7/77/5976d96c66a3d6e20d6a4344ea7384248ff8443b67894eb6b780e97ddb0f/tensorflow-2.7.4-cp38-cp38-win_amd64.whl (436.7 MB)
     |████████████████████████████████| 436.7 MB 104 kB/s
Requirement already satisfied: tensorboard~=2.6 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (2.10.0)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.21.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (0.27.0)
Requirement already satisfied: wheel<1.0,>=0.32.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorflow>=1.14->musicnn) (0.37.1)
Collecting keras<2.8,>=2.7.0rc0
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/6b/8b/065f94ba03282fa41b2d76942b87a180a9913312c4611ea7d6508fbbc114/keras-2.7.0-py2.py3-none-any.whl (1.3 MB)
     |████████████████████████████████| 1.3 MB 6.4 MB/s
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorboard~=2.6->tensorflow>=1.14->musicnn) (0.4.6)
Requirement already satisfied: markdown>=2.6.8 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorboard~=2.6->tensorflow>=1.14->musicnn) (3.4.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorboard~=2.6->tensorflow>=1.14->musicnn) (1.8.1)     
Requirement already satisfied: google-auth<3,>=1.6.3 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorboard~=2.6->tensorflow>=1.14->musicnn) (2.11.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorboard~=2.6->tensorflow>=1.14->musicnn) (0.6.1)
Requirement already satisfied: werkzeug>=1.0.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from tensorboard~=2.6->tensorflow>=1.14->musicnn) (2.2.2)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow>=1.14->musicnn) (5.2.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow>=1.14->musicnn) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow>=1.14->musicnn) (4.9)
Requirement already satisfied: requests-oauthlib>=0.7.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow>=1.14->musicnn) (1.3.1)
Requirement already satisfied: zipp>=0.5 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from importlib-metadata->numba>=0.45.1->librosa>=0.7.0->musicnn) (3.8.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow>=1.14->musicnn) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow>=1.14->musicnn) (3.2.1)
Collecting tensorflow-estimator<2.8,~=2.7.0rc0
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/db/de/3a71ad41b87f9dd424e3aec3b0794a60f169fa7e9a9a1e3dd44290b86dd6/tensorflow_estimator-2.7.0-py2.py3-none-any.whl (463 kB)
     |████████████████████████████████| 463 kB ...
Collecting threadpoolctl>=2.0.0
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/61/cf/6e354304bcb9c6413c4e02a747b600061c21d38ba51e7e544ac7bc66aecc/threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Requirement already satisfied: MarkupSafe>=2.1.1 in d:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages (from werkzeug>=1.0.1->tensorboard~=2.6->tensorflow>=1.14->musicnn) (2.1.1)
Building wheels for collected packages: audioread, numpy, resampy
  Building wheel for audioread (setup.py) ... done
  Created wheel for audioread: filename=audioread-3.0.0-py3-none-any.whl size=23706 sha256=84986037dad8f4f2eac9abf197d63eeb558521031259ad97736e51232f573d5b
  Stored in directory: c:\users\scillidan\appdata\local\pip\cache\wheels\e2\c3\9c\f19ae5a03f8862d9f0776b0c0570f1fdd60a119d90954e3f39
  Building wheel for numpy (setup.py) ... done
  Created wheel for numpy: filename=numpy-1.16.6-cp38-cp38-win_amd64.whl size=3946142 sha256=bcc845f029c836a5e8fca77bd06c15078245fd9af6eb524788792b96ec95490b
  Stored in directory: c:\users\scillidan\appdata\local\pip\cache\wheels\56\40\08\b4589e620c27337e8cb7a4c70a960ce67252310edf064f3f3d
  Building wheel for resampy (setup.py) ... done
  Created wheel for resampy: filename=resampy-0.2.2-py3-none-any.whl size=320732 sha256=35dd4bc318504c9a23a02ab8ce92b53dcba3497d5ee36dca3464291825788b65
  Stored in directory: c:\users\scillidan\appdata\local\pip\cache\wheels\e2\d8\d2\ebe9bdee286d235792e9d9c694bf38bd355562be46b9c04283
Successfully built audioread numpy resampy
Installing collected packages: numpy, llvmlite, threadpoolctl, scipy, numba, joblib, appdirs, tensorflow-estimator, soundfile, scikit-learn, resampy, pooch, keras, audioread, tensorflow, librosa, musicnn
  Attempting uninstall: numpy
    Found existing installation: numpy 1.23.3
    Uninstalling numpy-1.23.3:
      Successfully uninstalled numpy-1.23.3
  Attempting uninstall: llvmlite
    Found existing installation: llvmlite 0.39.1
    Uninstalling llvmlite-0.39.1:
      Successfully uninstalled llvmlite-0.39.1
  Attempting uninstall: scipy
    Found existing installation: scipy 1.9.1
    Uninstalling scipy-1.9.1:
      Successfully uninstalled scipy-1.9.1
  Attempting uninstall: numba
    Found existing installation: numba 0.56.2
    Uninstalling numba-0.56.2:
      Successfully uninstalled numba-0.56.2
  Attempting uninstall: tensorflow-estimator
    Found existing installation: tensorflow-estimator 2.10.0
    Uninstalling tensorflow-estimator-2.10.0:
      Successfully uninstalled tensorflow-estimator-2.10.0
  Attempting uninstall: keras
    Found existing installation: keras 2.10.0
    Uninstalling keras-2.10.0:
      Successfully uninstalled keras-2.10.0
  Attempting uninstall: tensorflow
    Found existing installation: tensorflow 2.10.0
    Uninstalling tensorflow-2.10.0:
      Successfully uninstalled tensorflow-2.10.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
transformers 4.19.2 requires numpy>=1.17, but you have numpy 1.16.6 which is incompatible.
torchmetrics 0.6.0 requires numpy>=1.17.2, but you have numpy 1.16.6 which is incompatible.
tifffile 2022.8.12 requires numpy>=1.19.2, but you have numpy 1.16.6 which is incompatible.
scikit-image 0.19.3 requires numpy>=1.17.0, but you have numpy 1.16.6 which is incompatible.
pywavelets 1.3.0 requires numpy>=1.17.3, but you have numpy 1.16.6 which is incompatible.
pytorch-lightning 1.4.2 requires numpy>=1.17.2, but you have numpy 1.16.6 which is incompatible.
pandas 1.4.4 requires numpy>=1.18.5; platform_machine != "aarch64" and platform_machine != "arm64" and python_version < "3.10", but you have numpy 1.16.6 which is incompatible.
opencv-python 4.1.2.30 requires numpy>=1.17.3, but you have numpy 1.16.6 which is incompatible.
opencv-python-headless 4.6.0.66 requires numpy>=1.17.3; python_version >= "3.8", but you have numpy 1.16.6 which is incompatible.
matplotlib 3.5.3 requires numpy>=1.17, but you have numpy 1.16.6 which is incompatible.
basicsr 1.4.2 requires numpy>=1.17, but you have numpy 1.16.6 which is incompatible.
Successfully installed appdirs-1.4.4 audioread-3.0.0 joblib-1.2.0 keras-2.7.0 librosa-0.8.1 llvmlite-0.36.0 musicnn-0.1.0 numba-0.53.1 numpy-1.16.6 pooch-1.6.0 resampy-0.2.2 scikit-learn-1.0.2 scipy-1.7.3 soundfile-0.10.3.post1 tensorflow-2.7.4 tensorflow-estimator-2.7.0 threadpoolctl-3.1.0

Dance types, e.g. Waltz, Swing

I'd be curious whether you can predict the type(s) of dance, like Waltz, Cha-cha, Samba, Salsa, West Coast Swing, etc.

While large music databases carry genre information, e.g. whether a track is techno, pop, classical, or jazz, there is barely any information about the type of dance.

I have access to metadata for some hundreds of songs and their dances to start with: https://www.welcher-tanz.de/interpreten/

But I don't know much about Python or aggregating waveforms for training. If you want to extend the tags with this information, please feel free to contact me.

Input from memory for tagger

It would be nice to have the option to use a file-like object as the input audio file for the tagger, or more broadly, a file from memory.
I see this as particularly applicable where audio files have to be disposed of after auto-tagging, e.g. when the service lives in the cloud with limited resources. Python's tempfile.TemporaryFile() would be the best option in such a scenario.

The librosa docs suggest using the soundfile library directly and then resampling, to achieve the same effect as librosa.load().

Inference with SavedModel format

Hi and many thanks for releasing this project!

I would like to convert the models in a SavedModel format and perform inference + feature extraction from there.
I was able to convert one of the models to a saved_model.pb format, although I am not sure this is the right way to do it:

import os
import tensorflow as tf
  
def convert():

    MODEL_DIR = 'MTT_musicnn/'
    trained_checkpoint_prefix = MODEL_DIR
    export_dir = os.path.join('export_dir', MODEL_DIR)

    graph = tf.Graph()
    with tf.compat.v1.Session(graph=graph) as sess:
        # Restore from checkpoint
        loader = tf.compat.v1.train.import_meta_graph(trained_checkpoint_prefix + '.meta')
        loader.restore(sess, trained_checkpoint_prefix)

        # Export checkpoint to SavedModel
        builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
        builder.add_meta_graph_and_variables(sess,
                                            [tf.saved_model.TRAINING, tf.saved_model.SERVING],
                                            strip_default_attrs=True)
        builder.save()

This way I am able to load it like:

model = keras.models.load_model(model_path)

and it would look like this:

<tensorflow.python.training.tracking.tracking.AutoTrackable object at 0x7f0f6c231668>

This AutoTrackable object does not seem to have a predict method, so it fails when I provide a batch of data.
Is there a clever way to make this work?
Thanks a lot!
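In case it's useful context, my understanding (which may be wrong) is that a SavedModel loaded in TF2 only exposes inference through model.signatures, and that add_meta_graph_and_variables called without a signature_def_map exports no signatures at all, which would explain why no predict-style call is available. A self-contained TF2 sketch of the signatures mechanism (the Doubler module and paths are made up for illustration):

```python
import tempfile

import tensorflow as tf

class Doubler(tf.Module):
    # input_signature lets this tf.function be exported as a serving signature
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def double(self, x):
        return {'y': x * 2.0}

m = Doubler()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(m, export_dir, signatures={'serving_default': m.double})

loaded = tf.saved_model.load(export_dir)   # AutoTrackable: no .predict()
infer = loaded.signatures['serving_default']
out = infer(x=tf.constant([1.0, 2.0]))     # signature inputs are keyword-only
print(out['y'].numpy())                    # [2. 4.]
```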

[QUESTION] About models

Hello Jordi,
I was looking at the recent TensorflowPredictMusiCNN, that's an amazing work, congrats!
In the essentia documentation there are several models linked here. Are these the same musicnn architecture and models as the pip-installable package, or have they been updated in the essentia repo?

Thank you!

Instruction how to install the package to solve dependency issues

  1. Create a virtual environment with Python 3.6.13 using the conda package manager:

conda create --name musicnn_env python=3.6.13

  2. Activate your new virtual environment:

conda activate musicnn_env

  3. musicnn is not available in any conda repository, including conda-forge, so the easiest approach is to use pip inside your activated conda environment:

pip install -r requirements.txt

  4. If you want to use musicnn in JupyterLab, make it available from the Kernel menu in Jupyter like this:

python -m ipykernel install --user --name musicnn_env

Put this into requirements.txt file:

audioread==3.0.1
librosa==0.8.1
musicnn==0.1.0
numpy==1.16.6
pandas==1.1.5
scikit-learn==0.24.2
scipy==1.5.4
soundfile==0.12.1
tensorflow==2.3.4
resampy==0.2.2
ipython==7.16.3

Update the project to current numpy and librosa

Hi Jordi,
We are trying to run musicnn but are having version issues with both numpy and librosa. The problem is that numpy==1.14.5 is quite outdated, and it is required to run librosa==0.7.0.
Any suggestion on how to fix it?

musicnn-training model cannot be used in this repo

When I train a model in the musicnn-training repo with the default config (model 11), I cannot use it in this repo.

The error is:
OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key conv2d_5/bias not found in checkpoint

ValueError: Model not implemented! (Musicnn_big)

I'm able to pip install and run the standard musicnn without issue, but when attempting to build and then run the mtt_musicnn_big it gives me the following error:

ValueError: Model not implemented!

I'm running this inside a venv with Python 3.8. Any advice on how to fix this? I am following the build instructions from the README. In case it's helpful, the build process spits out a number of warnings at the end:

### Warning:  Using unoptimized lapack ###
### Warning:  Using unoptimized lapack ###
no previously-included directories found matching 'doc/build'
no previously-included directories found matching 'doc/source/generated'
no previously-included directories found matching 'benchmarks/env'
no previously-included directories found matching 'benchmarks/results'
no previously-included directories found matching 'benchmarks/html'
no previously-included directories found matching 'benchmarks/numpy'
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.swp' found anywhere in distribution
warning: no previously-included files matching '*.bak' found anywhere in distribution
warning: no previously-included files matching '*~' found anywhere in distribution

UnboundLocalError: local variable 'batch' referenced before assignment

I have been experiencing this error quite often recently and I do not understand its main cause.
I was looking at this answer, but I cannot find anywhere else where the variable batch could be created or assigned outside batch_data.

What could be wrong?
Python version: 3.7
Traceback is

Computing spectrogram (w/ librosa) and tags (w/ tensorflow).. Traceback (most recent call last):
audio_tagger_1  |   File "main.py", line 95, in <module>
audio_tagger_1  |     suggested_tags = extract_tags(output_filename)
audio_tagger_1  |   File "main.py", line 15, in extract_tags
audio_tagger_1  |     return top_tags(filename, model="MTT_musicnn", topN=10)
audio_tagger_1  |   File "/usr/local/lib/python3.7/site-packages/musicnn/tagger.py", line 60, in top_tags
audio_tagger_1  |     taggram, tags = extractor(file_name, model=model, input_length=input_length, input_overlap=input_overlap, extract_features=False)
audio_tagger_1  |   File "/usr/local/lib/python3.7/site-packages/musicnn/extractor.py", line 158, in extractor
audio_tagger_1  |     batch, spectrogram = batch_data(file_name, n_frames, overlap)
audio_tagger_1  |   File "/usr/local/lib/python3.7/site-packages/musicnn/extractor.py", line 62, in batch_data
audio_tagger_1  |     return batch, audio_rep
audio_tagger_1  | UnboundLocalError: local variable 'batch' referenced before assignment
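For what it's worth, a minimal reproduction of this failure pattern (my guess is that in batch_data the batch variable is only assigned inside the loop over input windows, so nothing is ever assigned when the audio is shorter than one model input window; the stand-in function below is illustrative, not musicnn's actual code):

```python
def batch_data(n_windows):
    # Simplified stand-in for musicnn's batch_data: 'batch' is only
    # assigned inside the loop, mirroring the reported bug.
    for i in range(n_windows):
        batch = [i]
    return batch  # UnboundLocalError if the loop body never ran

print(batch_data(3))   # [2]
try:
    batch_data(0)      # e.g. audio shorter than the model input length
except UnboundLocalError as e:
    print('UnboundLocalError:', e)
```

If that is the cause, checking the clip duration before calling the extractor (or padding short clips) should avoid it.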

train/valid/test split for MagnaTagATune

Hi,
thanks for sharing the pretrained models!
I am currently trying to figure out which train/valid/test split you used. You linked to this repo, where I can't find any information about splits. They only mention a 13:1:3 split, which I first thought referred to the folders, but there are 16 folders, not 17 ...
Did you process the tags as proposed there?

I found this split but there are only 15244 train samples and you mentioned that your version of the dataset contains ~19000 train samples.

I assume you downloaded the audio and csv with tags from here.

I hope you can shed some light on my confusion :) Thanks!
Best regards,
Verena

Convert model to TensorFlow Lite

I'm trying to convert the MSD_musicnn model to tflite format using TensorFlow 2.15.1 on Debian 12. I have no experience training models, just doing inference, so it's hard for me to tell what's wrong with my approach. I'm basically copying parts of the extractor.py code and saving the model with tf.compat.v1.saved_model.simple_save, then trying to convert the saved model.

Here's the Python code:

import os
import sys
import tensorflow as tf
import tensorflow.saved_model
import models
import configuration as config

model = 'MSD_musicnn'
n_frames = 187

labels = config.MSD_LABELS
num_classes = len(labels)

with tf.name_scope('model'):
    x = tf.compat.v1.placeholder(tf.float32, [None, n_frames, config.N_MELS])
    is_training = tf.compat.v1.placeholder(tf.bool)
    y, timbral, temporal, cnn1, cnn2, cnn3, mean_pool, max_pool, penultimate = models.define_model(x, is_training, model, num_classes)
    normalized_y = tf.nn.sigmoid(y)

# tensorflow: loading model
sess = tf.compat.v1.Session()
sess.run(tf.compat.v1.global_variables_initializer())
saver = tf.compat.v1.train.Saver()
saver.restore(sess, os.path.dirname(__file__)+'/'+model+'/')

# Saving
inputs = {"model/Placeholder_1": x}
outputs = {"model/Sigmoid": normalized_y, "model/dense_1/BiasAdd": y, "model/dense/BiasAdd": penultimate}
tf.compat.v1.saved_model.simple_save(sess, './msd-musicnn-3', inputs, outputs)

I'm executing the code in the musicnn directory and it successfully produces a saved model. But when I try to run

tflite_convert --saved_model_dir=./msd-musicnn-3 --output_file=./msd-musicnn-3.tflite

I get the following error:

2024-04-06 09:10:54.132240: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
  File "/home/christian/tf/bin/tflite_convert", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/lite/python/tflite_convert.py", line 690, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/christian/tf/lib/python3.11/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/christian/tf/lib/python3.11/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
             ^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/lite/python/tflite_convert.py", line 673, in run_main
    _convert_tf2_model(tflite_flags)
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/lite/python/tflite_convert.py", line 274, in _convert_tf2_model
    converter = lite.TFLiteConverterV2.from_saved_model(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/lite/python/lite.py", line 2087, in from_saved_model
    saved_model = _load(saved_model_dir, tags)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/saved_model/load.py", line 912, in load
    result = load_partial(export_dir, None, tags, options)["root"]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/saved_model/load.py", line 1071, in load_partial
    root = load_v1_in_v2.load(
           ^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 309, in load
    result = loader.load(
             ^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 290, in load
    signature_functions = self._extract_signatures(wrapped, meta_graph_def)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py", line 185, in _extract_signatures
    signature_fn = wrapped.prune(feeds=feeds, fetches=fetches)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/eager/wrap_function.py", line 344, in prune
    lift_map = lift_to_graph.lift_to_graph(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/eager/lift_to_graph.py", line 253, in lift_to_graph
    sources.update(op_selector.map_subgraph(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/christian/tf/lib/python3.11/site-packages/tensorflow/python/ops/op_selector.py", line 417, in map_subgraph
    raise UnliftableError(
tensorflow.python.ops.op_selector.UnliftableError: A SavedModel signature needs an input for each placeholder the signature's outputs use. An output for signature 'serving_default' depends on a placeholder which is not an input (i.e. the placeholder is not fed a value).

Unable to lift tensor <tf.Tensor 'model/Sigmoid:0' shape=(None, 50) dtype=float32> because it depends transitively on placeholder <tf.Operation 'model/Placeholder_1' type=Placeholder> via at least one path, e.g.:

model/Sigmoid (Sigmoid)
 <- model/dense_1/BiasAdd (BiasAdd)
 <- model/dense_1/MatMul (MatMul)
 <- model/dropout_1/cond/Identity (Identity)
 <- model/dropout_1/cond (If)
 <- model/batch_normalization_10/batchnorm/add_1 (AddV2)
 <- model/batch_normalization_10/batchnorm/sub (Sub)
 <- model/batch_normalization_10/batchnorm/mul_2 (Mul)
 <- model/batch_normalization_10/batchnorm/mul (Mul)
 <- model/batch_normalization_10/batchnorm/Rsqrt (Rsqrt)
 <- model/batch_normalization_10/batchnorm/add (AddV2)
 <- model/batch_normalization_10/cond_1/Identity (Identity)
 <- model/batch_normalization_10/cond_1 (If)
 <- model/batch_normalization_10/moments/Squeeze_1 (Squeeze)
 <- model/batch_normalization_10/moments/variance (Mean)
 <- model/batch_normalization_10/moments/SquaredDifference (SquaredDifference)
 <- model/batch_normalization_10/moments/StopGradient (StopGradient)
 <- model/batch_normalization_10/moments/mean (Mean)
 <- model/dense/Relu (Relu)
 <- model/dense/BiasAdd (BiasAdd)
 <- model/dense/MatMul (MatMul)
 <- model/dropout/cond/Identity (Identity)
 <- model/dropout/cond (If)
 <- model/batch_normalization_9/batchnorm/add_1 (AddV2)
 <- model/batch_normalization_9/batchnorm/sub (Sub)
 <- model/batch_normalization_9/batchnorm/mul_2 (Mul)
 <- model/batch_normalization_9/batchnorm/mul (Mul)
 <- model/batch_normalization_9/batchnorm/Rsqrt (Rsqrt)
 <- model/batch_normalization_9/batchnorm/add (AddV2)
 <- model/batch_normalization_9/cond_1/Identity (Identity)
 <- model/batch_normalization_9/cond_1 (If)
 <- model/batch_normalization_9/moments/Squeeze_1 (Squeeze)
 <- model/batch_normalization_9/moments/variance (Mean)
 <- model/batch_normalization_9/moments/SquaredDifference (SquaredDifference)
 <- model/batch_normalization_9/moments/StopGradient (StopGradient)
 <- model/batch_normalization_9/moments/mean (Mean)
 <- model/flatten/Reshape (Reshape)
 <- model/concat_2 (ConcatV2)
 <- model/moments/Squeeze (Squeeze)
 <- model/moments/mean (Mean)
 <- model/concat_1 (ConcatV2)
 <- model/Add_1 (AddV2)
 <- model/Add (AddV2)
 <- model/transpose (Transpose)
 <- model/batch_normalization_6/cond/Identity (Identity)
 <- model/batch_normalization_6/cond (If)
 <- model/conv2d_5/Relu (Relu)
 <- model/conv2d_5/BiasAdd (BiasAdd)
 <- model/conv2d_5/Conv2D (Conv2D)
 <- model/Pad_1 (Pad)
 <- model/ExpandDims_1 (ExpandDims)
 <- model/concat (ConcatV2)
 <- model/Squeeze_4 (Squeeze)
 <- model/max_pooling2d_4/MaxPool (MaxPool)
 <- model/batch_normalization_5/cond/Identity (Identity)
 <- model/batch_normalization_5/cond (If)
 <- model/conv2d_4/Relu (Relu)
 <- model/conv2d_4/BiasAdd (BiasAdd)
 <- model/conv2d_4/Conv2D (Conv2D)
 <- model/batch_normalization/cond/Identity (Identity)
 <- model/batch_normalization/cond (If)
 <- model/batch_normalization/cond/Squeeze (Squeeze)
 <- model/Placeholder_1 (Placeholder)

Not sure what to make of this. It's complaining about the missing model/Placeholder_1, which I thought I was passing as input. Although, the dict keys in simple_save may only be signature names, with the tensors deciding which placeholders are fed: x would actually be model/Placeholder, and model/Placeholder_1 would be the is_training placeholder, which I never export as an input.

Any help would be appreciated, thanks!
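Update: I now suspect the dict keys in simple_save are only signature names and that the tensors determine which placeholders get exported; x is model/Placeholder, while is_training is model/Placeholder_1 and was never exported as an input. An untested tweak along these lines might satisfy the converter (the output directory name is arbitrary):

```
# Untested: export *both* placeholders, so the signature feeds
# model/Placeholder (x) as well as model/Placeholder_1 (is_training).
inputs = {"model/Placeholder": x, "model/Placeholder_1": is_training}
outputs = {"model/Sigmoid": normalized_y, "model/dense_1/BiasAdd": y, "model/dense/BiasAdd": penultimate}
tf.compat.v1.saved_model.simple_save(sess, './msd-musicnn-4', inputs, outputs)
```

TFLite may still reject a boolean is_training input, in which case freezing it to False before export could be an alternative.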
