mtg / compiam
Common tools for the computational analysis of Indian Art Music
Home Page: https://mtg.github.io/compIAM/
License: GNU Affero General Public License v3.0
Can you please specify the Python version you are using for this version of the package?
While implementing access to the Dunya corpora, I observed that we use several different variable names for, for instance, the concept of "path to file". Of course, this is not urgent, but we should keep in mind that we should standardize these parameter names before going public.
I've been thinking that maybe we could have an extractors.py file in melody/ (or a file for each extractor within an extractors/ folder), create the classes for the extractors (e.g. Melodia), and then initialize these in the __init__.py file. But maybe this is a bit of an overkill, and we would be adding more classes unnecessarily. We can leave it for the future, in case we have more extractors and the __init__.py file turns into a mess.
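As a rough sketch of that structure (class and method names here are illustrative assumptions, not compiam's actual API), extractors.py could define a common base class so that each extractor is interchangeable:

```python
# Hypothetical melody/extractors.py: one class per extractor,
# re-exported from melody/__init__.py. Names are illustrative.

class MelodyExtractor:
    """Common interface so all melody extractors are interchangeable."""

    def extract(self, audio_path):
        raise NotImplementedError


class Melodia(MelodyExtractor):
    def __init__(self, sample_rate=44100):
        self.sample_rate = sample_rate

    def extract(self, audio_path):
        # The real implementation would run the Melodia algorithm here.
        ...
```

melody/__init__.py would then simply do `from .extractors import Melodia`, keeping the package root tidy as more extractors are added.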
I think it'll be useful to already update the READMEs in melody/, rhythm/, etc., and also in corpora. That'd help define a complete "contributing" methodology and would also help better track the tools and functionalities that the library has at any moment.
The access to Dunya through compiam, to get to know the available recordings, artists, etc. for each subset of the corpora (CC and non-CC), is not working properly. The main issue is that when listing the available artists, for instance, it returns all artists in the database, without specifying to which subset each artist belongs. We might need to re-implement these functions so that access to Dunya is more intuitive.
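As a hedged sketch of what subset-aware listing could look like (the function name and the data shapes are assumptions for illustration, not Dunya's or compiam's actual API), the full artist listing could be filtered against the recordings of one subset:

```python
# Hypothetical helper: restrict a full Dunya artist listing to one corpus
# subset (e.g. CC). The dict keys used here are illustrative assumptions.

def artists_in_subset(all_artists, subset_recordings):
    """Keep only artists that appear in the given subset's recordings.

    all_artists: list of dicts with an "mbid" key (full Dunya listing).
    subset_recordings: list of dicts with an "artist_mbids" key.
    """
    subset_ids = {mbid for rec in subset_recordings for mbid in rec["artist_mbids"]}
    return [artist for artist in all_artists if artist["mbid"] in subset_ids]
```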
At this moment we are not using any formatting convention. We are using code from a lot of different sources, but we could use something like black to automatically reformat all the files into a common style. That would give consistency to the repository before going public :)
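For instance, a minimal black configuration could be pinned in pyproject.toml so everyone formats the same way (the values below are just an example, not a decided convention):

```toml
[tool.black]
line-length = 88
target-version = ["py38"]
```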
pip install compiam throws errors
On Colab, running pip install compiam throws the following error:
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
ERROR: Could not find a version that satisfies the requirement compiam (from versions: none)
ERROR: No matching distribution found for compiam
Even https://pypi.org/simple/compiam/ returns a 404.
Python version is v3.8.16
The function extract_segments() is called in segments.py but is not defined. The function in which this particular method is called is not used either. Also, the file self_sim.py has quite a lot of unused imports. We should take a look at that and clean up these files.
We have two problems in the tests.
On Google Colab, I am unable to successfully install compiam with pip. This appears to be due to a version issue with scikit-learn I think. I have attached the output below.
Python version: 3.10.12
!pip install compiam
Collecting compiam
Using cached compiam-0.2.1.tar.gz (67.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: matplotlib>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from compiam) (3.7.1)
Requirement already satisfied: numpy>=1.20.3 in /usr/local/lib/python3.10/dist-packages (from compiam) (1.23.5)
Requirement already satisfied: librosa==0.8.0 in /usr/local/lib/python3.10/dist-packages (from compiam) (0.8.0)
Requirement already satisfied: SoundFile==0.10.3.post1 in /usr/local/lib/python3.10/dist-packages (from compiam) (0.10.3.post1)
Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.10/dist-packages (from compiam) (1.3.2)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (from compiam) (4.6.0.66)
Requirement already satisfied: pathlib~=1.0.1 in /usr/local/lib/python3.10/dist-packages (from compiam) (1.0.1)
Requirement already satisfied: pytsmod==0.3.3 in /usr/local/lib/python3.10/dist-packages (from compiam) (0.3.3)
Collecting scikit-learn~=0.24.2 (from compiam)
Using cached scikit-learn-0.24.2.tar.gz (7.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
=======================================
As a workaround, I cloned the repository with !git clone https://github.com/MTG/compIAM.git and changed the scikit-learn version to 1.2.2 from 0.24.2 in the requirements.txt file. Then !pip install -r /content/compIAM/requirements.txt appeared to work successfully. So I think it might just be a version issue.

Right now, we have quite a few prints around, some tools that have a logger, and some tools that have neither. It would be very nice to standardize the use of a logger throughout all of compiam. Let's try to do it ASAP :)
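A minimal sketch of what that standardization could look like, using Python's standard logging module (the module name and format string are illustrative, not compiam's actual setup):

```python
import logging

# Hypothetical shared setup, e.g. in a compiam/logger.py, run once at import time.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)

# Each tool module would then request a named child logger instead of print():
logger = logging.getLogger("compiam.melody")
logger.info("Extracting pitch track...")
```

Using dotted child names ("compiam.melody", "compiam.rhythm", ...) lets users raise or lower verbosity per module with the standard logging API.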
We would like to include a list of Hindustani and Carnatic ragas in the compIAM repo. These will take the form of a YAML file with the following information:
We want to include as many ragas as possible, but the most common ragas are a must. At the moment we have the Carnatic ragas...
anandabhairavi
begada
bilahari
kalyani
ritigaula
sankarabharanam
todi
varaali
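A minimal sketch of what the Carnatic file could look like, assuming for now it is just a list of raga names (the filename and the per-raga fields are still to be decided):

```yaml
# carnatic_ragas.yaml (hypothetical filename); per-raga fields to be defined
ragas:
  - anandabhairavi
  - begada
  - bilahari
  - kalyani
  - ritigaula
  - sankarabharanam
  - todi
  - varaali
```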
We should move, for instance, mel-spectrogram computation and similar operations to common files so they can be used by several tools.
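In practice such a common file would probably just wrap librosa (already a compiam dependency, which provides librosa.feature.melspectrogram); the snippet below is only a numpy sketch of the shared-utility idea, with illustrative names:

```python
import numpy as np

# Hypothetical compiam/utils/features.py: one shared implementation that
# several tools import, instead of each computing spectrograms locally.

def stft_magnitude(y, n_fft=1024, hop_length=512):
    """Magnitude spectrogram of a mono signal, shape (n_fft // 2 + 1, n_frames)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop_length
    frames = np.stack(
        [y[i * hop_length : i * hop_length + n_fft] * window for i in range(n_frames)],
        axis=1,
    )
    return np.abs(np.fft.rfft(frames, axis=0))
```

A mel-spectrogram helper would then apply a mel filterbank (e.g. from librosa.filters.mel) on top of this magnitude spectrogram.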
I want to be able to access the confidence values for the pitch extraction methods. I can easily do it for Melodia, but I'm having a little trouble with FTANet-Carnatic. Is there already functionality for this? Otherwise I can make a pull request and request the changes to be merged.
Would it be possible to remove torch as an import in model_store/wrappers.py? It'd be good to keep these specific dependencies in the particular files for the models. I did that for FTANet by creating a create_model() function in the main class and using it in the wrapper. Maybe that would also be required for the rest of the actions that require torch, so that we do not have to import both torch and TensorFlow every time we load a model through the wrappers.
In case that turns out to be a problem or feels unnatural, it may be good to have a wrapper file for each model, or a tf_wrappers.py and a torch_wrappers.py.
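A minimal sketch of the deferred-import pattern being proposed (the class name and load() method are illustrative assumptions, not compiam's actual wrapper API):

```python
# Hypothetical wrapper: torch is only imported when the model is actually
# loaded, so importing the wrappers module itself stays dependency-free.

class TorchModelWrapper:
    def __init__(self, model_path):
        self.model_path = model_path
        self.model = None

    def load(self):
        import torch  # deferred: only needed once a model is really loaded

        self.model = torch.load(self.model_path)
        return self.model
```

With this pattern, users who only need the TensorFlow models never pay the cost of (or need to install) torch, and vice versa.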
Now that we have discarded the "non-trainable" tools from load_model() (#38), we should, at some point, fix the documentation I wrote a while ago, because the use of the wrappers there is not very clear.
It'd be nice to organize the tools by task inside each module.