lakhnes's People

Contributors

chrisdonahue

lakhnes's Issues

Error 501 when launching RPC server

Hi,

I am trying to use the chiptune synthesis server you provide with your code, but I cannot connect to it. I installed the required dependencies in a Python 2.7 virtualenv following the instructions in the README, but here is what I get when I launch the server:

$ python data/synth_server.py 1337
Opened chiptune synthesis server on port 1337
127.0.0.1 - - [19/May/2020 14:34:49] code 501, message Unsupported method ('GET')
127.0.0.1 - - [19/May/2020 14:34:49] "GET / HTTP/1.1" 501 -

On the browser itself, the message is similar:

Error response
Error code 501.
Message: Unsupported method ('GET').
Error code explanation: 501 = Server does not support this operation.

Do you have an idea of what could explain such an unexpected behavior?

Best,

Alain

EDIT: synth_client.py seems to work in Python 3, because the following message:

127.0.0.1 - - [19/May/2020 15:28:22] "POST /RPC2 HTTP/1.1" 200 -

is displayed in the terminal where synth_server.py is running when I launch synth_client.py.
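For context: an XML-RPC server only responds to POST requests on its RPC endpoint, so visiting it in a browser (which sends GET) will always return a 501. The server has to be called through an XML-RPC client instead. A minimal sketch, assuming the server exposes a tx1_to_wav(input_path, output_path) method as suggested by the notebook tracebacks elsewhere in this issue tracker (the file paths below are hypothetical):

# Minimal sketch: call the synthesis server over XML-RPC rather than with a browser GET.
# The method name tx1_to_wav and its (input_path, output_path) signature are assumed
# from the notebook tracebacks in other issues; the paths are placeholders.
import xmlrpc.client

s = xmlrpc.client.ServerProxy('http://127.0.0.1:1337')
s.tx1_to_wav('example.tx1.txt', 'example.wav')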

[Error] Bug in continuation

I am trying to run continuations.ipynb, but I'm getting this error in the second-to-last cell of the notebook.

Fault Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_11732\4057581347.py in
12 prime_ids = fn_to_ids[fn]
13 assert len(prime_ids) >= primelen + 1
---> 14 paprev(prime_ids[:primelen+genlen], '{}/{}_full.tx1.txt'.format(out_dir, fn))
15
16 prime_ids = prime_ids[:primelen + 1]

~\AppData\Local\Temp\ipykernel_11732\472169501.py in paprev(tx1_ids, fn, displaywav)
34 f.write('\n'.join(tx1))
35
---> 36 wav = tx1_to_wav(tx1)
37
38 wavfp = fn.replace('.tx1.txt', '.wav')

~\AppData\Local\Temp\ipykernel_11732\472169501.py in tx1_to_wav(tx1)
13 f.write('\n'.join(tx1))
14
---> 15 s.tx1_to_wav(tf.name, wf.name)
16 fs, wav = wavread(wf.name)
17

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in call(self, *args)
1110 return _Method(self.__send, "%s.%s" % (self.__name, name))
1111 def call(self, *args):
-> 1112 return self.__send(self.__name, args)
1113
1114 ##

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in __request(self, methodname, params)
1450 self.__handler,
1451 request,
-> 1452 verbose=self.__verbose
1453 )
1454

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in request(self, host, handler, request_body, verbose)
1152 for i in (0, 1):
1153 try:
-> 1154 return self.single_request(host, handler, request_body, verbose)
1155 except http.client.RemoteDisconnected:
1156 if i:

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in single_request(self, host, handler, request_body, verbose)
1168 if resp.status == 200:
1169 self.verbose = verbose
-> 1170 return self.parse_response(resp)
1171
1172 except Fault:

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in parse_response(self, response)
1340 p.close()
1341
-> 1342 return u.close()
1343
1344 ##

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in close(self)
654 raise ResponseError()
655 if self._type == "fault":
--> 656 raise Fault(**self._stack[0])
657 return tuple(self._stack)
658

Fault: <Fault 1: "<type 'exceptions.TypeError'>:must be convertible to a buffer, not PrettyMIDI">
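The Fault is raised on the Python 2 server side: somewhere a PrettyMIDI object is handed to an API that expects raw bytes (a "buffer"). As a hedged illustration of the kind of fix that usually resolves this class of error, the sketch below serializes a PrettyMIDI object to MIDI bytes by writing it to a temporary file; where exactly this would need to happen inside the server code is an assumption, not the author's actual fix.

# Hedged sketch: turn a PrettyMIDI object into raw MIDI bytes before passing it to
# anything that expects a buffer (e.g. the XML-RPC marshaller or a bytes-based API).
# Written in Python 3 syntax for clarity; the server in this repo runs under Python 2.
import os
import tempfile
import xmlrpc.client
import pretty_midi

def midi_to_bytes(pm):
    # Write the PrettyMIDI object to a temporary .mid file, then read the bytes back.
    tmp = tempfile.NamedTemporaryFile(suffix='.mid', delete=False)
    tmp.close()
    try:
        pm.write(tmp.name)
        with open(tmp.name, 'rb') as f:
            return xmlrpc.client.Binary(f.read())
    finally:
        os.remove(tmp.name)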

Questions regarding the pre-trained models

Hello, we had a few questions regarding the model weights that have been provided:

  1. We tried to evaluate the Lakh400kPretrainOnly model using the reproduce_paper_eval.sh script and got the following results (since we ran reproduce_paper_eval.sh, the validation and test sets come from the nesmdb dataset):
valid loss  1.86 | valid ppl  6.448 | test loss  1.71 | test ppl  5.541 

Can you please confirm that these match with what you were seeing?

  2. We also tried to generate a few samples using the Lakh400kPretrainOnly model. Out of the 25 samples we generated, 15 only had notes on the "WT" and "NO" channels and none on the "P1"/"P2"/"TR" channels. Do you have an intuition as to why this might be happening?

  3. We are also having trouble generating chiptunes from the LakhNES model. The chiptunes don't sound correct to us, and we think we might be missing a step in the generation process or performing one incorrectly. Here are the steps we followed (if you can point out anything we are missing or doing incorrectly, it will help us a lot):
    a) we downloaded the LakhNES model from the link provided in the repo
    b) we ran python generate.py model/pretrained/LakhNES/ --out_dir ./generated/LakhNES --num 25 (this ran without any errors)
    c) we used the tx1_to_midi function (in the tx1_midi.py script) to convert the generated tx1 files to MIDI, and then used the timidity software to listen to these MIDI files (a rough sketch of this conversion step appears below)
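For reference, this is roughly what we mean by step (c). A minimal sketch, assuming tx1_to_midi takes the TX1 token text and returns the MIDI file bytes; check data/tx1_midi.py for its actual signature before relying on this.

# Hedged sketch of step (c): batch-convert generated .tx1.txt files to MIDI.
# Assumes tx1_to_midi(tx1_text) returns MIDI bytes; the exact signature is an assumption.
import glob
from tx1_midi import tx1_to_midi

for tx1_fp in glob.glob('./generated/LakhNES/*.tx1.txt'):
    with open(tx1_fp, 'r') as f:
        tx1 = f.read()
    midi = tx1_to_midi(tx1)
    with open(tx1_fp.replace('.tx1.txt', '.mid'), 'wb') as f:
        f.write(midi)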

If you are able to provide the 775k LakhNES data (by LakhNES data we mean the Lakh MIDI examples mapped to NES channels) that you used for pre-training, it will help us a lot. We have generated our own 775k LakhNES examples using your scripts, but because there is some randomness involved in mapping the instruments, we are not entirely sure whether what we have matches what you used.

Thank you in advance!

ModuleNotFoundError: No module named 'mem_transformer'

Traceback (most recent call last):
File "generate.py", line 49, in
model = torch.load(f)
File "/Users/ale/Desktop/LakhNES/LakhNES-model/lib/python3.7/site-packages/torch/serialization.py", line 368, in load
return _load(f, map_location, pickle_module)
File "/Users/ale/Desktop/LakhNES/LakhNES-model/lib/python3.7/site-packages/torch/serialization.py", line 542, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'mem_transformer'
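This error comes from unpickling: torch.load restores the full model object, so pickle has to be able to import the mem_transformer module that defined it. A minimal sketch of the usual workaround, assuming mem_transformer.py lives in the model code directory of this repo (the path below is hypothetical; adjust it, or simply run generate.py from that directory):

# Hedged workaround sketch: make mem_transformer importable before torch.load unpickles the model.
import sys
sys.path.insert(0, '/path/to/LakhNES/model')  # hypothetical: directory containing mem_transformer.py

import torch
with open('model.pt', 'rb') as f:           # hypothetical checkpoint path
    model = torch.load(f, map_location='cpu')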

Preparing the NES data

Would it be possible to add information about how exactly to pre-process the NES data for fine-tuning the model? The data is already available for download in the TX1 format, but it's not clear whether it's already augmented, whether augmentation needs to be done by running tx1_paper_augment on every file, or whether this happens on the fly during training.
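If the augmentation does need to be applied offline, one possible shape for it is a simple per-file loop like the sketch below. Whether tx1_paper_augment is actually called this way (or is applied on the fly by the training code) is exactly the open question here, so treat every name, import location, and argument as an assumption.

# Purely illustrative sketch, assuming tx1_paper_augment takes the TX1 token text
# and returns a list of augmented variants. The import location and signature are
# hypothetical; the real workflow may instead augment on the fly during training.
import glob

for fp in glob.glob('./data/nesmdb_tx1/train/*.tx1.txt'):
    with open(fp, 'r') as f:
        tx1 = f.read()
    for i, aug in enumerate(tx1_paper_augment(tx1)):  # hypothetical call
        with open(fp.replace('.tx1.txt', '_aug{}.tx1.txt'.format(i)), 'w') as f:
            f.write(aug)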

LakhNES license

Hello! What a wonderful project, thank you for it!

I was wondering if you could explain how this project can be used in terms of licensing. There is no license file, so I assume others (myself included) have no rights to use it. Could you please elaborate on how it can be used, and add an explicit license if feasible?

Thank you!

Synthesis very slow

On my machine, audio synthesis takes considerably longer (2 min 22 s to render a 15 s sample from the test set) than running the model itself (9 s for 512 tokens). Is this because the synthesis engine is written in Python?

Is there any faster way to listen to the outputs? When I try listening to the MIDI file directly, it doesn't sound correct (most of the notes are not even audible).

[Win] [Error] Permission denied in continuations.ipynb

I'm trying to run continuations.ipynb, but I'm getting this error.

Fault Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_2120\4057581347.py in
12 prime_ids = fn_to_ids[fn]
13 assert len(prime_ids) >= primelen + 1
---> 14 paprev(prime_ids[:primelen+genlen], '{}/{}_full.tx1.txt'.format(out_dir, fn))
15
16 prime_ids = prime_ids[:primelen + 1]

~\AppData\Local\Temp\ipykernel_2120\472169501.py in paprev(tx1_ids, fn, displaywav)
34 f.write('\n'.join(tx1))
35
---> 36 wav = tx1_to_wav(tx1)
37
38 wavfp = fn.replace('.tx1.txt', '.wav')

~\AppData\Local\Temp\ipykernel_2120\472169501.py in tx1_to_wav(tx1)
13 f.write('\n'.join(tx1))
14
---> 15 s.tx1_to_wav_og(tf.name, wf.name)
16 fs, wav = wavread(wf.name)
17

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in call(self, *args)
1110 return _Method(self.__send, "%s.%s" % (self.__name, name))
1111 def call(self, *args):
-> 1112 return self.__send(self.__name, args)
1113
1114 ##

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in __request(self, methodname, params)
1450 self.__handler,
1451 request,
-> 1452 verbose=self.__verbose
1453 )
1454

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in request(self, host, handler, request_body, verbose)
1152 for i in (0, 1):
1153 try:
-> 1154 return self.single_request(host, handler, request_body, verbose)
1155 except http.client.RemoteDisconnected:
1156 if i:

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in single_request(self, host, handler, request_body, verbose)
1168 if resp.status == 200:
1169 self.verbose = verbose
-> 1170 return self.parse_response(resp)
1171
1172 except Fault:

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in parse_response(self, response)
1340 p.close()
1341
-> 1342 return u.close()
1343
1344 ##

~\AppData\Local\Programs\Python\Python37\lib\xmlrpc\client.py in close(self)
654 raise ResponseError()
655 if self._type == "fault":
--> 656 raise Fault(**self._stack[0])
657 return tuple(self._stack)
658

Fault: <Fault 1: "<type 'exceptions.IOError'>:[Errno 13] Permission denied: 'c:\\users\\affan\\appdata\\local\\temp\\tmpka1d6d'">
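This is the usual Windows tempfile pitfall: a NamedTemporaryFile cannot be reopened by name while the original handle is still open, so the synthesis server gets Errno 13 when it tries to read the file the notebook just wrote. A minimal sketch of the standard workaround, creating the temp files with delete=False and closing them before the RPC call (the method name mirrors the notebook traceback; the input file and ServerProxy setup are illustrative):

# Hedged sketch of the Windows workaround: close the temp files before the server opens them.
import os
import tempfile
import xmlrpc.client

s = xmlrpc.client.ServerProxy('http://127.0.0.1:1337')   # synthesis server from the README
tx1 = open('example.tx1.txt').read().splitlines()        # hypothetical TX1 input

tf = tempfile.NamedTemporaryFile(mode='w', suffix='.tx1.txt', delete=False)
wf = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
try:
    tf.write('\n'.join(tx1))
    tf.close()   # must be closed before another process can open it on Windows
    wf.close()
    s.tx1_to_wav_og(tf.name, wf.name)  # method name taken from the notebook traceback
finally:
    os.remove(tf.name)
    os.remove(wf.name)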

How do I finetune with my own MIDI folder?

I can generate files fine now with the provided 400k checkpoint, but I don't understand the instructions for fine-tuning on my own MIDI folder instead of the NES MIDIs.
Can you help me, please?
