richardyang40148 / MidiNet
This repository contains the source code of MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation.
Hello Richard,
I am trying to train on my own .npy dataset, but I couldn't find the exact shapes of your dataset in model.py.
Could you tell me the exact shapes of the training dataset?
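Based on the shape mentioned elsewhere in these issues, the model appears to expect piano-roll tensors of shape (n, 1, 16, 128): n bars, one channel, 16 time steps, 128 MIDI pitches, plus a 1-D chord condition per bar (13-dimensional in the paper's chord encoding). These shapes are an assumption inferred from the discussion, not confirmed by the author. A minimal sketch of dummy arrays with the expected dimensions:

```python
# Hypothetical shapes inferred from the issue discussion, not from the repo itself.
N_BARS, CHANNELS, TIME_STEPS, PITCHES = 4, 1, 16, 128

def zero_bar():
    """One bar: 1 channel x 16 time steps x 128 pitch classes, all zeros."""
    return [[[0.0] * PITCHES for _ in range(TIME_STEPS)] for _ in range(CHANNELS)]

data_X = [zero_bar() for _ in range(N_BARS)]   # (n, 1, 16, 128) current bars
prev_X = [zero_bar() for _ in range(N_BARS)]   # same shape, previous bars
data_Y = [[0] * 13 for _ in range(N_BARS)]     # (n, 13) chord condition (assumed)

print(len(data_X), len(data_X[0]), len(data_X[0][0]), len(data_X[0][0][0]))  # 4 1 16 128
```

In practice these nested lists would be saved with `numpy.save` as the .npy files the training script loads.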
Hi Richard,
Thanks for providing the code. Can you also provide a working example where you generate sound from scratch using a learned model?
I am new to the audio domain, and it would be a great help.
I am trying to use the function generation_test() in the utils.py file to generate data from scratch. But it is throwing an error:
FailedPreconditionError: Attempting to use uninitialized value g_h0_prev_conv/w
Hi, @RichardYang40148. Thank you for your great work!
I have a question about generating midi files.
After the model generates 'samples' data, how do I convert the data to midi format files?
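One common approach (a hedged sketch, not taken from this repository, assuming each generated sample is a binary 16x128 piano-roll bar) is to scan each pitch column and merge consecutive active time steps into (pitch, start, end) note events, which a library such as pretty_midi can then write out as a MIDI file:

```python
def pianoroll_to_notes(bar, step_seconds=0.125, threshold=0.5):
    """Convert one (16, 128) piano-roll bar into (pitch, start, end) note events.

    bar: list of 16 time steps, each a list of 128 activations.
    step_seconds: assumed duration of one time step (a sixteenth note).
    """
    notes = []
    for pitch in range(128):
        start = None
        for t, step in enumerate(bar):
            active = step[pitch] > threshold
            if active and start is None:
                start = t                      # note-on
            elif not active and start is not None:
                notes.append((pitch, start * step_seconds, t * step_seconds))
                start = None                   # note-off
        if start is not None:                  # note held to the end of the bar
            notes.append((pitch, start * step_seconds, len(bar) * step_seconds))
    return notes

# Tiny example: C4 (MIDI pitch 60) held for the first 4 steps.
bar = [[0.0] * 128 for _ in range(16)]
for t in range(4):
    bar[t][60] = 1.0
print(pianoroll_to_notes(bar))  # [(60, 0.0, 0.5)]
```

Each resulting tuple maps directly onto a pretty_midi `Note(velocity, pitch, start, end)`.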
Hi, I read the MuseGAN code, but it isn't complete, and I'm confused about the process of converting MIDI data into your training data.
Would you mind sharing your code for converting MIDI data?
Thanks a lot!
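The author's preprocessing code is not in the repository, but the reverse direction of the conversion can be sketched under the same assumptions (one bar = 16 sixteenth-note steps x 128 pitches; note timings already extracted from the MIDI file, e.g. with pretty_midi). This is a hedged illustration, not the author's pipeline:

```python
def notes_to_pianoroll(notes, steps_per_bar=16, step_seconds=0.125):
    """Quantize (pitch, start, end) note events into one (16, 128) binary bar.

    Assumes all events fall inside a single bar starting at time 0.
    """
    bar = [[0] * 128 for _ in range(steps_per_bar)]
    for pitch, start, end in notes:
        t0 = int(round(start / step_seconds))
        t1 = max(t0 + 1, int(round(end / step_seconds)))  # at least one step long
        for t in range(t0, min(t1, steps_per_bar)):
            bar[t][pitch] = 1
    return bar

# Two quarter notes: C4 then E4.
roll = notes_to_pianoroll([(60, 0.0, 0.5), (64, 0.5, 1.0)])
print(sum(row[60] for row in roll), sum(row[64] for row in roll))  # 4 4
```

Stacking one such bar per training example (with an extra leading channel axis) would produce the (n, 1, 16, 128) array discussed in the other issues.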
I'm trying to replicate MidiNet as described in your paper, but I'm unclear on the format of the required data. Would it be possible to obtain an example dataset for data_X, prev_X and data_Y? Thanks in advance.
(n, 1, 16, 128) is the training data shape. What does the 1 mean? Is it the channel for the image where it is set to one because the image is in grayscale?
Hi @RichardYang40148. Thanks for sharing the work you have done. Would it be possible to get a sample processed input data that the model is expecting as input or if you can share how the midi files have been processed initially?
Hi, I don't really understand how 'your_training_data_previous_bar.npy' is constructed. I have read through your report. In your code, if a bar is the first bar, its previous bar is an array of zeros. How do you know from the training set that a bar is a first bar? Or is the previous-bar data constructed by shifting all the training bars one position and prepending an all-zero bar at the front?
For example, if (1,2,3,4,5) is your training data, is (0,1,2,3,4) your previous-bar data? The numbers 1, 2, 3 indicate the first, second and third bar.
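If the previous-bar array is indeed built by shifting (an assumption based on this reading of the code, not confirmed by the author), the construction is a one-liner:

```python
def make_prev_bars(bars):
    """Previous-bar array: an all-zero bar, then bars[0..n-2].

    bars: list of bars, each bar a flat list of the same length.
    """
    if not bars:
        return []
    zero_bar = [0] * len(bars[0])
    return [zero_bar] + bars[:-1]

# Toy bars labeled 1..5, as in the example above.
bars = [[1], [2], [3], [4], [5]]
print(make_prev_bars(bars))  # [[0], [1], [2], [3], [4]]
```

Note this treats the whole array as one continuous song; if the dataset concatenates multiple songs, a zero bar would presumably be needed at each song boundary as well.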
Hi Richard, I'm studying your research on using CNNs to generate music; it is outstanding work that interests me a lot. I found some questions about it. Could you please check them when you are free? It would be my honor.
(1) In your paper, the discriminator is fed the previous bars as a condition, but in the code I cloned from GitHub there is no such condition; only the 1-D condition y is applied. What is the reason for this?
(2) How could I get the training dataset for MidiNet v1? Or is there a convenient way to convert ordinary MIDI files to meet the requirements of the model?
Looking forward to your kind reply!
Hi,
I was just wondering, are there any pre-trained models for MidiNet? I plan to use this model for transfer learning to generate some music.
Cheers,
Suikei
In the Arxiv Preprint, https://arxiv.org/abs/1703.10847, you mention in Section 4.2 that you use a layer that zeros out all outputs except the one with the highest value.
In this implementation, however, you simply apply a sigmoid layer to the output of the generator. Could you please explain why you chose the sigmoidal output?
In your report, you stated:
"For creating a monophonic note sequence, we added a layer to the end of G to turn off per time step all but the note with the highest activation." May I know how you implemented this in your code? Is it the sigmoid activation function in the last layer?
tf.nn.sigmoid(deconv2d(h4, [self.batch_size, 16, 128, self.c_dim],k_h=1, k_w=128,d_h=1, d_w=2, name='g_h5'))
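The sigmoid in that line only squashes activations to (0, 1); it does not by itself enforce monophony. One reading (an assumption, not confirmed by the author) is that the paper's rule is applied as a hard-argmax post-processing step on the generator output, per time step:

```python
def monophonic(bar):
    """Per time step, zero out all but the single highest activation.

    bar: list of time steps, each a list of pitch activations.
    """
    out = []
    for step in bar:
        best = max(range(len(step)), key=step.__getitem__)  # index of the max
        out.append([v if i == best else 0.0 for i, v in enumerate(step)])
    return out

step = [0.1, 0.9, 0.3]
print(monophonic([step]))  # [[0.0, 0.9, 0.0]]
```

In a TensorFlow graph the same effect could be obtained by masking the output with a one-hot of its per-step argmax, but whether the released code does this anywhere is exactly the open question in this thread.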
I am having some trouble understanding what labels were used while training the network. Specifically, I am interested to know what you used alongside the training data directories in 1e6bf06: MidiNet/v1/model.py, line 136.
I would appreciate it if you could add any information on the structure of the dataset, so that others can build their own without touching your implementation.