
Comments (10)

DatanIMU commented on May 29, 2024

Great, it sounds smooth, haha!

from muzic.

DatanIMU commented on May 29, 2024

Haha, it works!

There was no lead in my MIDI file, so I changed the vocal into chords (a clavichord, which is what I could find in the MuseScore software).

Then I used:

  • Stage 1: chord -> lead (for "Select condition tracks" I click 'cp'; for "Select content tracks" I click 'clp') (to generate the lead)
  • Stage 2: chord, lead -> bass, drum, guitar, piano, string

Then a fresh MIDI file was generated. It sounds interesting.


trestad commented on May 29, 2024

I'm glad to hear that you were able to successfully run our code. However, I'd like to point out a few things.

Firstly, the instrument name for "lead" should be "square wave synthesizer," which can be found under the electronic music category.

Secondly, when generating, you don't need to specify a track as content again if it's already included in the conditions; just 'cp->l' is fine.

Lastly, in our code, the "chord" is played by a piano with the program set to 1. So converting the vocal to a clavichord won't help your generation; in fact, your clavichord will be filtered out. The code runs successfully because you already have a piano track, and our code automatically infers the chord from the piano.
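To illustrate why the clavichord disappears, here is a minimal sketch (not the repository's actual code) of filtering tracks by a program-number whitelist. The kept program numbers are taken from the filter condition quoted later in this thread; the clavichord's program number 7 here is just a hypothetical example.

```python
# Sketch only, not GETMusic's actual preprocessing: tracks whose MIDI
# program number is not on the whitelist are simply dropped.
KEPT_PROGRAMS = {0, 25, 32, 48, 80}  # piano, guitar, bass, strings, square lead

def surviving_tracks(tracks):
    """Keep only the (name, program) pairs whose program is whitelisted."""
    return [(name, program) for name, program in tracks if program in KEPT_PROGRAMS]

tracks = [("piano", 0), ("clavichord", 7), ("lead", 80)]  # 7 is hypothetical
print(surviving_tracks(tracks))  # -> [('piano', 0), ('lead', 80)]
```

The clavichord entry is gone, while the piano survives and can still be used to infer the chord track.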

I suggest you change your sax to a legal 'lead' using the following code:

```python
import miditoolkit

midi_file = 'YOUR-MIDI-FILE-PATH'
midi_obj = miditoolkit.midi.parser.MidiFile(midi_file)

# find the track whose program number you want to modify
for inst in midi_obj.instruments:
    print(inst)

for inst in midi_obj.instruments:
    if inst.program == YOUR_SAX_PROGRAM_NUMBER:
        inst.program = 80  # change it to the square-wave synthesizer
midi_obj.dump(save_path)  # save your modified MIDI file
```

Additionally, as we mentioned in Section 3.6 of the README, keep in mind that a saxophone melody can differ significantly from the lead melodies played by the square wave synthesizer in our training data. Directly assigning the saxophone part to play as the lead melody may introduce a substantial domain gap and compromise the quality of the generated output.


DatanIMU commented on May 29, 2024
  1. I have a MIDI that only has a vocal track (https://musescore.com/user/34889258/scores/6130079).
  2. I replaced the instrument with Square Synthesiser, without modifying the "name on main score" and "abbreviated name"; they keep their default values (Mallet Synthesiser, Square Synthesiser) (Mal. Syn.).
  3. python track_generation.py --load_path /path-of-checkpoint --file_path example_data/inference ----------> "l" and "dgp"
  4. The generated song is the same as the original singer's song. I mean the lead part is the same, and the 'dgp' part is almost the same.
  5. Is this because the song is in your training data, or did I make some mistake?


trestad commented on May 29, 2024

First, it is true that it is unnecessary to specify a track as a content track if it has already served as a condition track.

I am a little confused: since this MIDI only has a vocal track, how do you evaluate the similarity of the generated 'dgp' part? Do you mean it sounds almost the same as the original accompaniment of this song? The lead should be the same because we do not modify the condition tracks. As for the similar 'dgp' part: if this song had lacked any accompaniment, we would certainly have filtered it out during preprocessing.


DatanIMU commented on May 29, 2024

If I put the lead in the condition tracks, the generation sounds almost the same as the original accompaniment of this song.

Haha, if I leave the condition tracks empty, then a new one comes out. It's very interesting; however, it sounds as if it were written by a rather unconventional composer.

Here is my opinion:
Music that people love is loved because it resonates with humans. This resonance is like resonance in the physical world: when two sources share the same frequency, each is moved by the other. I believe that when people are happy, their hearts beat quickly, so fast songs are more likely to move them; when people are sad, their heart rate and hormone levels are low, so no song that evokes sadness is fast-paced. So what I mean is that you could incorporate the human pulse as a parameter for AI inference, which might make it easier to produce better works.

Humans had dance before they had songs. At first, humans celebrated a hunt by dancing around bonfires, jumping and shouting; later, beautiful singing developed. Some rhythmic songs almost evoke dance movements as you listen, and at a concert the audience will unconsciously dance along. So what I mean is that we can broaden our training ideas and not only use songs to train on songs. The true source of songs is dance movement. We could use dance movements as training data, that is, use time-framed images to generate music. This approach might be more likely to generate a good concert.


DatanIMU commented on May 29, 2024

I think using dance images in time frames would make it much easier to train an AI model than other approaches.


trestad commented on May 29, 2024

Thank you for your engaging and interesting advice! Perhaps we can draw inspiration from it.

By the way, you can try this and see whether the quality improves: input the MIDI file and set the condition track to 'c' rather than leaving the condition tracks empty. The code will then infer the chord progression from the lead melody in your input and condition on it, rather than conditioning directly on the lead melody. Empirically, we find that chord guidance gives the generation a more regular pattern and better melodic quality.


DatanIMU commented on May 29, 2024

The input MIDI file only has a lead (replacing the vocal). I mean there is only one track in my MIDI, and I have now configured it as the lead.
Do you mean I should use 'c' as the condition track?


trestad commented on May 29, 2024

Yes, even if the MIDI has only a lead track, you can set the condition to 'c', and the chord will be inferred automatically. GETMusic can then generate the tracks you want following the chord progression inferred from the lead melody, rather than conditioning directly on the lead melody.

Details can be found here:

```python
if (0 < note[3] < 128) and (note[2] in [0, 25, 32, 48, 80]):
```
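Read concretely, this is a sketch of what that condition does, assuming `note[2]` holds the track's MIDI program number and `note[3]` holds a value that must lie strictly between 0 and 128: notes from piano (0), guitar (25), bass (32), strings (48), and the square-wave lead (80) are kept, and everything else, such as a saxophone, is dropped.

```python
# Sketch of the quoted filter; the tuple layout of `note` is an
# assumption based on the indices used in the condition above.
VALID_PROGRAMS = [0, 25, 32, 48, 80]  # piano, guitar, bass, strings, square lead

def note_is_kept(note):
    # note[3] must be in the open interval (0, 128); note[2] is the program
    return (0 < note[3] < 128) and (note[2] in VALID_PROGRAMS)

print(note_is_kept((60, 480, 80, 100)))  # square-wave lead -> True
print(note_is_kept((60, 480, 65, 100)))  # alto sax (program 65) -> False
```

This is why remapping the sax's program to 80 before preprocessing is necessary for it to survive as the lead.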
