This repository contains the source code to reproduce the results of the ISMIR'19 paper Learning to Generate Music with Sentiment. The paper proposes a deep learning method to generate music with a given sentiment (positive or negative). It uses a new dataset called VGMIDI, which contains 95 pieces labelled according to sentiment as well as 728 unlabelled pieces. All pieces are piano arrangements of video game soundtracks in MIDI format.
This project depends on a few Python 3 modules, so you need to install them first:
$ pip3 install torch torchvision numpy music21
Assuming you are in the project's root directory, you can install it as follows:
$ python3 setup.py install
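If you want to make sure the dependencies were installed correctly before running the longer experiments, a quick check such as the one below can help. This is just a minimal sketch: the MIDI path is a placeholder for any file in the dataset, not a file guaranteed to exist at that exact location.

```python
# Quick sanity check of the installed dependencies (a minimal sketch;
# the MIDI path below is a placeholder, not a file shipped by the repo).
import torch
import music21

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

# Parse any MIDI file from the dataset to confirm music21 can read it.
score = music21.converter.parse("input/generative/midi/vgmidi/example.mid")
print("Parsed a score with", len(score.parts), "part(s)")
```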
The scripts to reproduce the results of the paper are all inside the examples/ directory:
1. Split the VGMIDI dataset into shards:

$ python3 examples/generate_shards.py -datadir input/generative/midi/vgmidi/ -data_type midi_perform -shards 3
2. Move the test shards into their own directory and place all shards under the input directory:

$ mkdir shards/test/
$ mv shards/test_shard_* shards/test/
$ mv shards input/generative/midi/vgmidi-shards
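To sanity-check the split before training, you can list the generated shard files. The sketch below only assumes the shards end up as .txt files under input/generative/midi/vgmidi-shards/, as suggested by the paths used in the next step; it makes no assumption about the encoding inside them.

```python
# List the generated shard files and their sizes (a minimal sketch).
import glob
import os

shard_dir = "input/generative/midi/vgmidi-shards"
for path in sorted(glob.glob(f"{shard_dir}/**/*.txt", recursive=True)):
    size_kb = os.path.getsize(path) / 1024
    print(f"{path}: {size_kb:.1f} KiB")
```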
3. Train a generative LSTM on the shards:

$ python3 examples/train_generative.py -train_data input/generative/midi/vgmidi-shards -test_data input/generative/midi/vgmidi-shards/test/test_shard_0.txt -data_type midi_perform -save_path trained/
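After training finishes, you may want to confirm that a model was written to trained/. The snippet below is only a sketch under the assumption that the script saves a standard PyTorch state dict; the checkpoint file name is a placeholder, so adjust it to whatever train_generative.py actually writes.

```python
# Peek at a saved checkpoint (a sketch; the file name is a placeholder and
# the training script may store its model in a different format).
import torch

checkpoint = torch.load("trained/model.pth", map_location="cpu")
if isinstance(checkpoint, dict):
    for name, value in checkpoint.items():
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(name, shape)
```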
4. Train a Logistic Regression (LR) classifier, using the hidden layer of the trained generative LSTM to encode the labelled MIDI files, and evolve the LR weights to generate positive pieces:

$ python3 examples/train_classifier_unsupervised.py -model_path trained/vgmidi-shards -sent_data_path input/classifier/midi/vgmidi/vgmidi.csv -results_path output/ -sentiment 1
5. Train a Logistic Regression (LR) classifier, using the hidden layer of the trained generative LSTM to encode the labelled MIDI files, and evolve the LR weights to generate negative pieces:

$ python3 examples/train_classifier_unsupervised.py -model_path trained/vgmidi-shards -sent_data_path input/classifier/midi/vgmidi/vgmidi.csv -results_path output/ -sentiment 0
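Conceptually, steps 4 and 5 encode each labelled piece with the generative LSTM's hidden layer, fit a logistic regression classifier on those encodings, and then evolve the LR weights to steer generation toward the target sentiment. The sketch below only illustrates the encode-then-classify idea, with random placeholder features standing in for the hidden states and a gradient-trained LR in PyTorch; it is not the repository's implementation.

```python
# Conceptual sketch of the classifier stage, NOT the repository's code:
# random vectors stand in for the LSTM hidden states of the 95 labelled
# pieces, and a plain logistic regression is fit on them.
import torch

torch.manual_seed(0)

features = torch.randn(95, 512)              # placeholder "hidden states"
labels = torch.randint(0, 2, (95,)).float()  # 1 = positive, 0 = negative

model = torch.nn.Linear(512, 1)              # logistic regression = linear layer + sigmoid
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = torch.nn.BCEWithLogitsLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(features).squeeze(1), labels)
    loss.backward()
    optimizer.step()

predictions = (torch.sigmoid(model(features)).squeeze(1) > 0.5).float()
accuracy = (predictions == labels).float().mean().item()
print(f"Final loss: {loss.item():.3f}, training accuracy: {accuracy:.2f}")
```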
You can download the VGMIDI dataset here.