
ml4a's Introduction


ml4a
Machine Learning for Artists

ml4a is a Python library for making art with machine learning.

Example

ml4a bundles the source code of various open source repositories as git submodules and contains wrappers to streamline and simplify them. For example, to generate sample images with StyleGAN2:

from ml4a import image
from ml4a.models import stylegan

network_pkl = stylegan.get_pretrained_model('ffhq')
stylegan.load_model(network_pkl)

samples = stylegan.random_sample(3, labels=None, truncation=1.0)
image.display(samples)

Every model in ml4a.models, including the stylegan module above, imports all of the original repository's code into its namespace, allowing low-level access.

Support ml4a

Become a sponsor

You can support ml4a by donating through GitHub sponsors.

How to contribute

Start by joining the Slack or following us on Twitter. Contribute to the codebase, or help write tutorials.

License

ml4a itself is licensed MIT, but you are also bound to the licenses of any models you use.

ml4a's People

Contributors

ailgun, brannondorsey, dominikus, dromi, frnsys, genekogan, hoh, james-oldfield, jbn, kylemath, lkkchung, mangtronix, mayukhdeb, sharkwithlasers, squinard, thommiano, uhzeel


ml4a's Issues

Include other pretrained models

Instead of VGG16, how can I implement e.g. ResNet for transfer learning?

What structure does my label file need for my custom dataset?

Thanks in advance

New query image issue

I am trying to build a CBIR system based on your work:
https://github.com/ml4a/ml4a-guides/blob/master/notebooks/image-search.ipynb
It's very useful. Thank you.

I have question about the query phase.
In this link the input of the query image is randomly picked up from the database.
So the dimensionality-reduced features from the PCA step are used for the distance calculation to find similar images.

What if the query image is a new image, which doesn't exist in the database?
Do we need to rebuild the feature database with the query image plus the original image database (CNN feature extraction + PCA), and then calculate the similarity between the query image's feature and the features of the other images in the database?

Looking forward to hearing from you.

Thanks.
Sean
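For what it's worth, the standard CBIR approach is not to rebuild the database: extract CNN features for the new image, then project them with the PCA that was already fitted on the database. A minimal sketch with plain NumPy, assuming the fitted PCA's mean and components were kept around (with scikit-learn this is just `pca.transform`):

```python
import numpy as np

def project_query(feat, pca_mean, pca_components):
    # reuse the PCA fitted on the database: center with the stored mean,
    # then project onto the stored principal axes (no refitting needed)
    return (feat - pca_mean) @ pca_components.T

def most_similar(query_vec, db_vecs, k=5):
    # plain euclidean distance against the precomputed database vectors
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:k]
```

The function and variable names here are illustrative, not from the notebook; the point is that the PCA transform is reused, not refit, so the database features stay valid.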

audio-tsne notebook cannot load all the files in Vintage Drum Machine

When running the audio-tsne notebook, some loading errors happen when loading wav files, as shown below.

Can anyone suggest the reason, please?

get 1 of 6153 = ../data/Vintage Drum Machines/Korg T3/Bassdrum-01.wav
error loading ../data/Vintage Drum Machines/Korg T3/Rim shot.wav
get 101 of 6153 = ../data/Vintage Drum Machines/X Drum LM8953/Snaredrum-03.wav
get 201 of 6153 = ../data/Vintage Drum Machines/Cheetah MD16/CHTMD15.wav
get 301 of 6153 = ../data/Vintage Drum Machines/Novation Drumstation/Kit07/DS07808Clave.wav
get 401 of 6153 = ../data/Vintage Drum Machines/Novation Drumstation/Kit12/DS12909Hat_O.wav
get 501 of 6153 = ../data/Vintage Drum Machines/Novation Drumstation/Kit13/DS13808CongaHi.wav
get 601 of 6153 = ../data/Vintage Drum Machines/Novation Drumstation/Kit02/DS02808CongaHi.wav
get 701 of 6153 = ../data/Vintage Drum Machines/Novation Drumstation/Kit18/DS18909Kick.wav
get 801 of 6153 = ../data/Vintage Drum Machines/Novation Drumstation/Kit10/DS10808TomMid.wav
get 901 of 6153 = ../data/Vintage Drum Machines/Roland MC-303/Cymbals/Realoh2.wav
get 1001 of 6153 = ../data/Vintage Drum Machines/Boss DR-110/DR-110Snare.wav
get 1101 of 6153 = ../data/Vintage Drum Machines/SH09 Drum sounds/909bass5.wav
get 1201 of 6153 = ../data/Vintage Drum Machines/Yamaha RY30/RY30Hat_O1.wav
get 1301 of 6153 = ../data/Vintage Drum Machines/Yamaha EX5/EX5C 010.wav
get 1401 of 6153 = ../data/Vintage Drum Machines/Boss DR-660/DR-660Hat_C06.wav
get 1501 of 6153 = ../data/Vintage Drum Machines/Boss DR-660/DR-660Perc66.wav
get 1601 of 6153 = ../data/Vintage Drum Machines/Boss DR-660/DR-660Snare14.wav
get 1701 of 6153 = ../data/Vintage Drum Machines/Quasimidi 309/QuasiA 022.wav
get 1801 of 6153 = ../data/Vintage Drum Machines/Quasimidi 309/QuasiB 111.wav
get 1901 of 6153 = ../data/Vintage Drum Machines/Boss DR-202/202per52.wav
get 2001 of 6153 = ../data/Vintage Drum Machines/Boss DR-202/202rim01.wav
get 2101 of 6153 = ../data/Vintage Drum Machines/Simmons SDS5/SDTomNoise.wav
get 2201 of 6153 = ../data/Vintage Drum Machines/Korg KPR-77/Hat Open.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_01.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_15.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_29.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_28.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_14.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_00.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_16.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_02.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_03.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_17.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_13.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_07.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_06.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_12.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_04.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_10.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_11.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_05.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_20.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_08.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_09.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_21.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_23.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_22.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_26.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_27.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_19.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_25.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_31.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_30.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_24.wav
error loading ../data/Vintage Drum Machines/Kawai K3/K3M_18.wav
get 2301 of 6153 = ../data/Vintage Drum Machines/Korg M1/Misc/Tubalar Bell-03.wav
get 2401 of 6153 = ../data/Vintage Drum Machines/Roland D-70/Rim Shot.wav
get 2501 of 6153 = ../data/Vintage Drum Machines/Roland R-8/R8Snare24.wav
get 2601 of 6153 = ../data/Vintage Drum Machines/Roland R-8/R8TomH2.wav
get 2701 of 6153 = ../data/Vintage Drum Machines/Roland R-8/R8Snare38.wav
get 2801 of 6153 = ../data/Vintage Drum Machines/MFB-512/512 loop.wav
get 2901 of 6153 = ../data/Vintage Drum Machines/Boss DR-550/DR550Tom02_M.wav
get 3001 of 6153 = ../data/Vintage Drum Machines/Emu SP12/Toms/Tom L-04.wav
get 3101 of 6153 = ../data/Vintage Drum Machines/Roland TR-808/TR-808Kick13.wav
get 3201 of 6153 = ../data/Vintage Drum Machines/Alesis SR-16/Snaredrums/Snaredrum-02.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_toms/SR16 Tom_Elect 2.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_toms/SR16 Tom_Elect 1.wav
get 3301 of 6153 = ../data/Vintage Drum Machines/Alesis SR15/sr16_kik22/SR16 Kik_Room Lo2.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_kik22/SR16 Kik_Dry08.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_kik22/SR16 Kik_Dry05.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_kik22/SR16 Kik_Elect Hi.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Clave Hi.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Agogo Hi.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Fish Lo.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Fishstick.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Sticks Hi.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Cowbell Hi.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Cabasa.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Clave Lo.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_perc22/SR16 Prc_Shaker.wav
get 3401 of 6153 = ../data/Vintage Drum Machines/Alesis SR15/sr16_snares/SR16 Snr_Dry Deep.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_hats22/SR16 Hat_Vari.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_hats22/SR16 Hat_Closed.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_hats22/SR16 Hat_Tight.wav
error loading ../data/Vintage Drum Machines/Alesis SR15/sr16_hats22/SR16 Hat_Small.wav
get 3501 of 6153 = ../data/Vintage Drum Machines/Roland TR-626/TR-626Hat_O.wav
get 3601 of 6153 = ../data/Vintage Drum Machines/Roland S50/Cymbals/Ride.wav
get 3701 of 6153 = ../data/Vintage Drum Machines/Akai XR-10/Percussion/Cabasa.wav
get 3801 of 6153 = ../data/Vintage Drum Machines/Yamaha TG-33/Bassdrum-03.wav
get 3901 of 6153 = ../data/Vintage Drum Machines/Linn LinnDrum/Cymbals/Hat Open.wav
get 4001 of 6153 = ../data/Vintage Drum Machines/Roland Digital Drum Brain DDR-30/Snaredrum-02.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Snaredrum.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Conga H.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Cowbell L.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Cowbell H.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Bassdrum.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Hat Closed.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Clap.wav
error loading ../data/Vintage Drum Machines/Roland TR-505/Rimshot.wav
get 4101 of 6153 = ../data/Vintage Drum Machines/Roland TR-909/TR-909Hat C 01.wav
get 4201 of 6153 = ../data/Vintage Drum Machines/Yamaha RM 50/Snaredrums/SNAREDRUM_108.wav
get 4301 of 6153 = ../data/Vintage Drum Machines/Yamaha RM 50/Cymbals/CYMBAL_005.wav
error loading ../data/Vintage Drum Machines/Yamaha RM 50/FX/FX_138.wav
get 4401 of 6153 = ../data/Vintage Drum Machines/Yamaha RM 50/FX/FX_024.wav
error loading ../data/Vintage Drum Machines/Yamaha RM 50/FX/FX_136.wav
error loading ../data/Vintage Drum Machines/Yamaha RM 50/FX/FX_137.wav
get 4501 of 6153 = ../data/Vintage Drum Machines/Yamaha RM 50/Bassdrums/BD-083.wav
get 4601 of 6153 = ../data/Vintage Drum Machines/Yamaha RM 50/Toms/TOMS_021.wav
get 4701 of 6153 = ../data/Vintage Drum Machines/Roland TR-707/Cowbell.wav
get 4801 of 6153 = ../data/Vintage Drum Machines/Rhodes Polaris/Snaredrum-03.wav
get 4901 of 6153 = ../data/Vintage Drum Machines/Alesis DM5/DM5Hat_O08.wav
get 5001 of 6153 = ../data/Vintage Drum Machines/Alesis DM5/DM5Perc018.wav
get 5101 of 6153 = ../data/Vintage Drum Machines/Alesis DM5/DM5FX23.wav
get 5201 of 6153 = ../data/Vintage Drum Machines/Alesis DM5/DM5Tom10_Lo.wav
get 5301 of 6153 = ../data/Vintage Drum Machines/Yamaha CS6/Yamaha CS6 072.wav
get 5401 of 6153 = ../data/Vintage Drum Machines/Yamaha CS6/Yamaha CS6 047.wav
get 5501 of 6153 = ../data/Vintage Drum Machines/Roland JD-990/Cowbell.wav
get 5601 of 6153 = ../data/Vintage Drum Machines/Roland JD800 Dance Card/JD800 Dance Card Drums 053.wav
get 5701 of 6153 = ../data/Vintage Drum Machines/Sequential Drumtacks/DrumTraks/DT Hat Closed.wav
get 5801 of 6153 = ../data/Vintage Drum Machines/Roland R-5/R-5 Timbale.wav
get 5901 of 6153 = ../data/Vintage Drum Machines/Roland System-100/Snaredrum-10.wav
get 6001 of 6153 = ../data/Vintage Drum Machines/Simmons SDS-5/BASSDRUM/Bassdrum-12.wav
get 6101 of 6153 = ../data/Vintage Drum Machines/Yamaha RY-30/Percussion/Woodblock-01.wav
calculated 6091 feature vectors

Trouble with query image on image search

Hi. I would like to ask what the right code is to get the query image, but not randomly. I got a little bit confused, as I'm a beginner. I'm about to finish my college project about image search, which needs to query from one specific picture rather than a random one. Thank you.

DQN guide: game.render() is blank

game.render() seems to draw a blank screen for me in the DQN guide. I stepped through the guide and pushed the output to the repo, so pull first. Maybe this is a 2.7/3 discrepancy again?

'build_target_vecs' not defined

Hi there, I'm getting an error with 'build_target_vecs' (see below) when running the script - is there something obvious I'm missing?

Traceback (most recent call last):
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\site-packages\keras-1.2.2-py3.5.egg\keras\engine\training.py", line 429, in data_generator_task
generator_output = next(self._generator)
File "seq2seq.py", line 149, in generate_batches
y = build_target_vecs()
NameError: name 'build_target_vecs' is not defined

Traceback (most recent call last):
File "seq2seq.py", line 166, in <module>
model.fit_generator(generator=generate_batches(batch_size, one_hot=False), samples_per_epoch=n_examples, nb_epoch=n_epochs, verbose=1)
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\site-packages\keras-1.2.2-py3.5.egg\keras\engine\training.py", line 1532, in fit_generator
str(generator_output))
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

Issue with stylegan.dataset_tool transform parameter

I'm following this ml4a notebook

When I try to run this block:

from ml4a.models import stylegan

# replace 'images_folder' and 'dataset_output' with your own locations.
config = {
    'images_folder': '/pathToMyDataset/',
    'dataset_output': '/pathToDatasetDest/',
    'transform': None,
    'labels': False,
    'size': 256
}

stylegan.dataset_tool(config)

I get:

TypeError: sequence item 7: expected str instance, NoneType found

This error only occurs when I use 'transform': None.
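One workaround for this class of error (an assumption here: the message suggests the wrapper joins the config values into a command string, and `None` cannot be joined) is to drop `None`-valued keys before calling the tool, letting it fall back to its defaults:

```python
config = {
    'images_folder': '/pathToMyDataset/',
    'dataset_output': '/pathToDatasetDest/',
    'transform': None,
    'labels': False,
    'size': 256
}

# drop None-valued keys so nothing non-string reaches the command builder
config = {k: v for k, v in config.items() if v is not None}
```

Note that `False` and `256` survive the filter; only the `None` entry is removed.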

JSON not serializable

I'm using the audio t-sne notebook and getting a "not json serializable" error when I try to make a json file (both from the vintage drum machine data and my own). I have only used Python 2.7, as I can't change the Python kernel in the notebook using Docker. Output is:


TypeError Traceback (most recent call last)
in ()
6 data = [{"path":os.path.abspath(f), "point":[x, y]} for f, x, y in zip(sound_paths, x_norm, y_norm)]
7 with open(tsne_path, 'w') as outfile:
----> 8 json.dump(data, outfile)
9
10 print("saved %s to disk!" % tsne_path)

/root/miniconda2/lib/python2.7/json/__init__.pyc in dump(obj, fp, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, encoding, default, sort_keys, **kw)
187 # could accelerate with writelines in some versions of Python, at
188 # a debuggability cost
--> 189 for chunk in iterable:
190 fp.write(chunk)
191

/root/miniconda2/lib/python2.7/json/encoder.pyc in _iterencode(o, _current_indent_level)
429 yield _floatstr(o)
430 elif isinstance(o, (list, tuple)):
--> 431 for chunk in _iterencode_list(o, _current_indent_level):
432 yield chunk
433 elif isinstance(o, dict):

/root/miniconda2/lib/python2.7/json/encoder.pyc in _iterencode_list(lst, _current_indent_level)
330 else:
331 chunks = _iterencode(value, _current_indent_level)
--> 332 for chunk in chunks:
333 yield chunk
334 if newline_indent is not None:

/root/miniconda2/lib/python2.7/json/encoder.pyc in _iterencode_dict(dct, _current_indent_level)
406 else:
407 chunks = _iterencode(value, _current_indent_level)
--> 408 for chunk in chunks:
409 yield chunk
410 if newline_indent is not None:

/root/miniconda2/lib/python2.7/json/encoder.pyc in _iterencode_list(lst, _current_indent_level)
330 else:
331 chunks = _iterencode(value, _current_indent_level)
--> 332 for chunk in chunks:
333 yield chunk
334 if newline_indent is not None:

/root/miniconda2/lib/python2.7/json/encoder.pyc in _iterencode(o, _current_indent_level)
440 raise ValueError("Circular reference detected")
441 markers[markerid] = o
--> 442 o = _default(o)
443 for chunk in _iterencode(o, _current_indent_level):
444 yield chunk

/root/miniconda2/lib/python2.7/json/encoder.pyc in default(self, o)
182
183 """
--> 184 raise TypeError(repr(o) + " is not JSON serializable")
185
186 def encode(self, o):

TypeError: 0.69805652 is not JSON serializable
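The failing value is almost certainly a NumPy float32 scalar from the t-SNE coordinates, which Python's `json` module refuses to serialize. A minimal fix is to pass `default=float`, which is called for any value `json` can't encode and works for anything that supports `float()`. Sketched below with a stand-in class, since the real culprit would be `numpy.float32`:

```python
import json

class Float32Like:
    """Stand-in for numpy.float32, which json can't serialize directly."""
    def __init__(self, v):
        self.v = v
    def __float__(self):
        return self.v

data = [{"path": "/abs/path.wav",
         "point": [Float32Like(0.69805652), Float32Like(0.123)]}]

# default=float is invoked for any object json doesn't know how to encode
serialized = json.dumps(data, default=float)
```

Equivalently, cast in the comprehension that builds `data`: `"point": [float(x), float(y)]`.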

No module named processing

Hi
I've been trying to use the dataset_utils.py file for my preprocessing, but when I run it the following happens:

Traceback (most recent call last):
  File "dataset_utils.py", line 71, in <module>
    from processing import *
ModuleNotFoundError: No module named 'processing'

I ran it on my local machine, and also on a machine on Paperspace, as well as in a Gradient notebook. Nowhere can I find such a module, and when I try to pip install it:

# pip install processing
Collecting processing
  Downloading https://files.pythonhosted.org/packages/3b/2d/a6f17cc99d9c45c33eb3eccd6999505d9197b31f0845a845919032262a01/processing-0.52.zip (178kB)
100% |████████████████████████████████| 184kB 9.9MB/s
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-wiup4irl/processing/setup.py", line 12
        raise ValueError, 'Versions of Python before 2.4 are not supported'
                        ^
    SyntaxError: invalid syntax

What am I missing? It seems to be working fine for other people. Thanks in advance.

SciPy deprecated imread

In the Lapnorm.py file the load image calls scipy.misc.imread

def load_image(path, h, w):
    img0 = scipy.misc.imread(path, mode='RGB')
    img0 = scipy.misc.imresize(img0, (h, w)).astype('float64')
    return img0

See this issue for reference
https://stackoverflow.com/questions/15345790/scipy-misc-module-has-no-attribute-imread

It looks like there should be a simple way to replace that with the new image loading; I'm not too familiar with Python or what that imread normally returns.

I'll submit a fix for it when I figure everything out.
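A drop-in replacement, as a sketch: Pillow plus NumPy is the usual substitution suggested for the removed `scipy.misc.imread`/`imresize` (note that PIL's `resize` takes `(width, height)`, the reverse of the `(h, w)` argument order here):

```python
import numpy as np
from PIL import Image

def load_image(path, h, w):
    # Pillow replaces the removed scipy.misc.imread (mode='RGB' maps to
    # convert('RGB')) and scipy.misc.imresize
    img = Image.open(path).convert('RGB')
    img = img.resize((w, h))          # PIL wants (width, height)
    return np.asarray(img).astype('float64')
```

This returns an `(h, w, 3)` float64 array, matching what the original function produced.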

genre_scraper.py only scraping a maximum of ~4000 pictures

I opened this issue here as well because I don't know who maintains the scraper: robbiebarrat/art-DCGAN#18

Basically the scraper is only scraping 3000-4000 pictures of a genre, even though there are supposed to be at least 20k pictures in the case of the portrait and landscape genres.

I think it is because the site is only displaying max 3600 pictures right now online, as can be seen here:
https://www.wikiart.org/en/paintings-by-genre/portrait?select=featured#!#filterName:featured,viewType:masonry (scroll down to load more)

Is there a fix for this?

lstm sampling crashes for small temperatures

just made some updates to the rnn guide, so pull first.

first, a quick fix:

if seed is None and len(seed) < max_len:
    raise Exception('Seed text must be at least {} chars long'.format(max_len))

should actually be the following, if I'm not mistaken (I've changed it):

if seed is not None and len(seed) < max_len:
    raise Exception('Seed text must be at least {} chars long'.format(max_len))

then the rest of the guide runs fine, but the sampling function crashes for me when temperature < 1.0 with the following trace. I think somehow the probabilities are not summing to 1? Maybe an underflow issue here.


ValueError Traceback (most recent call last)
in ()
8 for temp in [1.0]: #[0.2, 0.5, 1., 1.2]:
9 print('\n\ttemperature:', temp)
---> 10 print(generate(temperature=temp))

in generate(temperature, seed, predicate)
22
23 # sample the character to use based on the predicted probabilities
---> 24 next_idx = sample(probs, temperature)
25 next_char = labels_char[next_idx]
26

in sample(probs, temperature)
33 a = np.log(probs)/temperature
34 a = np.exp(a)/np.sum(np.exp(a))
---> 35 return np.argmax(np.random.multinomial(1, a, 1))

mtrand.pyx in mtrand.RandomState.multinomial (numpy/random/mtrand/mtrand.c:32793)()

ValueError: sum(pvals[:-1]) > 1.0
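One fix that matches this symptom (an assumption about the cause, but `sum(pvals[:-1]) > 1.0` is the classic sign): `np.random.multinomial` validates the probabilities in float64, and float32 softmax values that were renormalized can sum to slightly more than 1 after the cast. Doing the temperature scaling in float64 and renormalizing explicitly avoids it:

```python
import numpy as np

def sample(probs, temperature):
    # temperature-scale in log space, in float64, so rounding error
    # can't push the renormalized probabilities above 1.0
    a = np.log(np.asarray(probs, dtype=np.float64)) / temperature
    a = np.exp(a - np.max(a))   # subtract the max for numerical stability
    a /= a.sum()
    return int(np.argmax(np.random.multinomial(1, a)))
```

This is a sketch of the guide's `sample` helper with the dtype handling made explicit, not the guide's exact code.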

running PCA crashes my computer

When I try to run PCA in the eigenfaces.ipynb notebook my computer crashes (cold boot). When I watch the system monitor it appears that the CPU is working at over 100% so it seems the problem would be there. I'm running Debian Stretch with an Intel i7-4790K CPU and a GeForce GTX 970 GPU. Here is the specific line of code:

n_components = 100
pca = PCA(n_components=n_components, svd_solver='randomized', whiten=True)
pca.fit(X)

I've tried reducing the components to 50 to no avail.

Any advice on how to diagnose the problem would be greatly appreciated.

Error going into the second epoch

First epoch appears to run through fine, then I get this error when it rolls into the second epoch:

Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\site-packages\keras-1.2.2-py3.5.egg\keras\engine\training.py", line 429, in data_generator_task
generator_output = next(self._generator)
File "seq2seq2.py", line 168, in generate_batches
y = y[idx]
MemoryError

Traceback (most recent call last):
File "seq2seq2.py", line 179, in <module>
model.fit_generator(generator=generate_batches(batch_size, one_hot=False), samples_per_epoch=n_examples, nb_epoch=n_epochs, verbose=1)
File "C:\Users\Tobias\AppData\Local\Programs\Python\Python35\lib\site-packages\keras-1.2.2-py3.5.egg\keras\engine\training.py", line 1532, in fit_generator
str(generator_output))
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

Any clue what is causing this?

adjustments to torch-dreams example

For the torch-dreams example.

Some adjustments that would be nice:

  1. make it possible to pass an image directly (via ml4a.image.load_image) into the config or into the dreamer function, instead of having to save it to disk first, like in the deepdream and neural_style examples.

  2. put make_custom_func into the wrapper (models/torch-dreams.py). Best to abstract the low-level stuff.

  3. use ml4a native stuff to save/display images to keep things consistent between repos. I've already started that in the example, although I noticed some noisy artifacts that didn't go away with np.clip; maybe you can figure out what's going on there @Mayukhdeb.

Guide "Reinforcement Learning: Deep Q-Networks" always asserts

Hi!
You have very useful lessons/guides about machine learning! (I could tell yours are the best.)
But while trying to run the example from "Reinforcement Learning: Deep Q-Networks", the code always asserts at:
# debug, "this should never happen"
assert not np.array_equal(state, next_state)

Strange... it looks like the environment state is not changing?
When I comment out the asserts I don't get any "wins" at all (this is expected, of course).
Why can this happen, and what should I do?
You can see code here(at my fork):
https://github.com/mushketer888/ml4a-guides/tree/master/examples/deepqnetwork

Thanks

assertion error in deep-q networks guide

during training, the assertion fails.

training...

AssertionError Traceback (most recent call last)
in ()
26
27 # debug, "this should never happen"
---> 28 assert not np.array_equal(new_state, prev_state)
29
30 agent.remember(prev_state, action, new_state, reward)

AssertionError:
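One common cause of this symptom (an assumption on my part, not confirmed against the guide's code) is storing a reference to a state array that the environment then mutates in place, so the "previous" and "next" states always compare equal. Keeping an explicit copy fixes the comparison:

```python
import numpy as np

# if the environment reuses/mutates its state buffer in place,
# a stored reference always equals the "next" state
state = np.zeros(4)

prev_state = state        # reference: tracks every later mutation
snapshot = state.copy()   # snapshot: frozen at this step

state += 1.0              # environment step mutates the buffer in place

assert np.array_equal(state, prev_state)      # the bug: looks "unchanged"
assert not np.array_equal(state, snapshot)    # the fix: a copy sees the change
```

If this is the cause, calling `.copy()` on the state before stepping the environment should make the guide's assertion pass.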

SPADE pretrained model

Hello, it seems that there is a problem with the Google Drive links. Can I download the pretrained models from somewhere else? Thanks!


ROI for input images

It would be useful to specify an ROI (region of interest) for a given image so that the image gets pre-cropped before being fed into the net, while the final output resolution stays the same as the input image.

This is useful when the foreground object is small compared to the background. Cropping the image helps with mask quality, as in the image below:


getting ValueError: too many values to unpack when plotting rasterfairy

it's throwing an error in this line:

     10 for img, grid_pos in tqdm(zip(images, grid_assignment)):
---> 11     idx_x, idx_y = grid_pos
     12     x, y = tile_width * idx_x, tile_height * idx_y
     13     tile = Image.open(img)

ValueError: too many values to unpack

the iterator in for img, grid_pos in tqdm(zip(images, grid_assignment)): destructures grid_pos as idx_x, idx_y, but inspecting grid_assignment shows it has two elements: the first being the 1k images and the second being the dimensions of the grid. I got it to work by changing that line to:

for img, grid_pos in tqdm(zip(images, grid_assignment[0])):

ValueError: Negative dimension size caused by subtracting 3 from 1

First of all, great work with this project. It's been really helpful!

I'm a beginner at Python, so I don't know if there is something I'm doing wrong.

I'm running into a problem when trying to run the convolutional_neural_networks example in the notebooks repository.

When I try to run the following code:

model.add(Convolution2D(
    n_filters, n_conv, n_conv,
    border_mode='valid',
    input_shape=(1, height, width)
))

I run into this error:

Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 594, in call_cpp_shape_fn
status)
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: Negative dimension size caused by subtracting 3 from 1

cheers,
Alberto.

Cartoonization notebook throws shape error with other images

The cartoonization notebook works as is, but it throws the following error when run with other images.

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Incompatible shapes: [1,182,137,64] vs. [1,182,138,64]
	 [[{{node generator/add}}]]
	 [[add_1/_75]]
  (1) Invalid argument: Incompatible shapes: [1,182,137,64] vs. [1,182,138,64]
	 [[{{node generator/add}}]]

The issue seems to be an off-by-one error between the width that the image is downscaled/cropped to and the width that the model expects.

dataset utils w face recognition - get error cannot identify image file 'face/.DS_Store'

Install dependencies, run

python3 dataset_utils.py --input_src face --output_dir cropped --w 256 --h 256 --action face --save_mode output_only --save_ext jpg

get

OSError: cannot identify image file 'face/.DS_Store'

I can get around this by manually removing the .DS_Store, but it always comes back. I feel like when I was reading through the scripts, I saw something regarding DS_Store but now I can't track it down.
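A general-purpose guard (a hypothetical helper, not something already in dataset_utils.py) is to filter the directory listing down to non-hidden files with image extensions before processing, so macOS's `.DS_Store` can never reach the image loader:

```python
import os

IMAGE_EXTS = {'.jpg', '.jpeg', '.png', '.bmp', '.gif'}

def list_images(folder):
    # skip hidden files like .DS_Store and anything without an image extension
    return sorted(
        os.path.join(folder, f)
        for f in os.listdir(folder)
        if not f.startswith('.')
        and os.path.splitext(f)[1].lower() in IMAGE_EXTS
    )
```

This is more robust than deleting `.DS_Store` by hand, since Finder recreates the file whenever you open the folder.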

cartoonization resolution bug

cartoonization resizes images to have resolution %2==0, need to account for odd-sized images.

from ml4a import image
from ml4a.models import cartoonization

img = image.load_image('https://s3.amazonaws.com/cdn-origin-etr.akc.org/wp-content/uploads/2017/11/26114711/Shiba-Inu-standing-in-profile-outdoors.jpg')
img_toon = cartoonization.run(img)
image.display([img, img_toon])
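Until the wrapper accounts for odd sizes, one workaround (a hypothetical helper based on the resolution % 2 == 0 constraint described above) is to trim at most one pixel per axis so both dimensions are even before calling `cartoonization.run`:

```python
def even_crop_box(w, h):
    # crop box (left, upper, right, lower) that trims at most one pixel
    # per axis, leaving both dimensions even (resolution % 2 == 0)
    return (0, 0, w - (w % 2), h - (h % 2))
```

With Pillow this would be applied as `img = img.crop(even_crop_box(*img.size))` before running the model.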

Tips for improving search result? retraining?

Thanks for the amazing work!

Given that I have a fairly specific domain I'm searching (for example, always print artwork on t-shirts, or pictures of cow patterns, or whatever), what steps would you recommend to tune the network? Some things I thought of for the actual images, which helped my results, are to reduce noise, trim the image automatically to only include the relevant data, and apply contrast/sharpness/other general noise-removal algorithms.

I've also tried changing the layer (fc1 versus fc2) and tweaking the PCA number of components.
Would you recommend a different network that's supported by keras, like InceptionResNetV2 etc?

But what about tweaking the weights of the actual network (vgg16_weights_tf_dim_ordering_tf_kernels.h5)? Would it be worth retraining those weights on my fairly huge and domain-specific image set (200,000+ images)? Can you recommend any reading material? What are your thoughts?

How about changing the similarity grouping? I found that cosine worked best for me, euclidean not so much

So I guess to sum it up, given one has a very domain specific image set they are searching, what are some ways to improve the results?

LSTM notebook needs input text

the rnn notebook fails on text = open('input_text.txt', 'r').read() because no input_text.txt is supplied with the guides. Perhaps we should include one in the assets folder.

unsupported operand type(s) for +: 'NoneType' and 'NoneType' in dataset_utils.py

Running the following:

python dataset_utils.py --input_src ./img/ --output_dir ./faces/ --save_mode split --action face --w 256 --h 256 --centered --face_crop 0.7

where ./img/ is a directory relative to the script that has jpgs of all images with faces, and ./faces/ is an empty folder on the drive, and ./targetface.jpg is a reference jpg. I get this error:

Traceback (most recent call last):
  File "dataset_utils.py", line 236, in <module>
    main(args)
  File "dataset_utils.py", line 185, in main
    img = img.crop((jx, jy, jx + jw, jy + jh))
TypeError: unsupported operand type(s) for +: 'NoneType' and 'NoneType'

Seems to be an issue with this line:

if face_crop is not None:
    jx, jy, jw, jh = get_crop_around_face(img, target_encodings, out_w/out_h, face_crop, face_crop_lerp)
    img = img.crop((jx, jy, jx + jw, jy + jh))

Looks like my values for jx, jy, jw, and jh are all 'None'.

seq2seq guide: tokenizer crashes

getting this on source_tokenizer.fit_on_texts(en_texts).

TypeError                                 Traceback (most recent call last)
in ()
      4
      5 source_tokenizer = Tokenizer(max_vocab_size, filters=filter_chars)
----> 6 source_tokenizer.fit_on_texts(en_texts)
      7 target_tokenizer = Tokenizer(max_vocab_size, filters=filter_chars)
      8 target_tokenizer.fit_on_texts(de_texts)

/usr/local/lib/python2.7/site-packages/Keras-1.0.6-py2.7.egg/keras/preprocessing/text.pyc in fit_on_texts(self, texts)
     85         for text in texts:
     86             self.document_count += 1
---> 87             seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)
     88             for w in seq:
     89                 if w in self.word_counts:

/usr/local/lib/python2.7/site-packages/Keras-1.0.6-py2.7.egg/keras/preprocessing/text.pyc in text_to_word_sequence(text, filters, lower, split)
     30     if lower:
     31         text = text.lower()
---> 32     text = text.translate(maketrans(filters, split*len(filters)))
     33     seq = text.split(split)
     34     return [_f for _f in seq if _f]

TypeError: character mapping must return integer, None or unicode
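This error is characteristic of a Python 2 str/unicode mismatch in Keras 1's maketrans-based filtering; a hedged workaround sketch is to normalize all texts to unicode before calling fit_on_texts (the utf-8 encoding is an assumption about the corpus):

```python
def to_unicode(texts, encoding='utf-8'):
    # Keras 1's text_to_word_sequence chokes when str and unicode are mixed,
    # so decode byte strings up front (no-op for already-decoded text)
    return [t.decode(encoding) if isinstance(t, bytes) else t for t in texts]

print(to_unicode([b'hello world', 'already unicode']))
```

One would then call source_tokenizer.fit_on_texts(to_unicode(en_texts)) instead.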

list index out of range when attempting to use face_processing

Running the following:

python dataset_utils.py --input_src ./img/ --output_dir ./faces/ --save_mode split --action face --w 256 --h 256 --centered --face_crop 0.7 --target_face_image ./targetface.jpg

where ./img/ is a directory (relative to the script) containing JPEGs of images with faces, ./faces/ is an empty output folder, and ./targetface.jpg is a reference JPEG. I get this error:

Traceback (most recent call last):
  File "dataset_utils.py", line 236, in <module>
    main(args)
  File "dataset_utils.py", line 134, in main
    target_encodings = get_encodings(target_face_image) if target_face_image else None
  File "/home/paperspace/Projects/ml4a-guides/utils/face_processing.py", line 23, in get_encodings
    target_encodings = face_recognition.face_encodings(target_face_img)[0]
IndexError: list index out of range
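face_recognition.face_encodings returns an empty list when it detects no face in the image, so indexing [0] raises IndexError; a guard sketch (the error message wording is mine, not the library's):

```python
def first_encoding(encodings, source='target image'):
    """Return the first face encoding, failing with a clear message
    instead of an opaque IndexError when no face was detected."""
    if not encodings:
        raise ValueError('No face detected in %s; try a clearer, front-facing photo' % source)
    return encodings[0]

print(first_encoding([[0.1, 0.2]]))  # → [0.1, 0.2]
```

In practice this usually means the reference JPEG needs to be swapped for one where the face is larger or better lit.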

No module named blessings

[screenshot of the error, 2016-12-14]

It's reporting that the blessings module is not installed, even though I do have blessings installed. python -c "import blessings" doesn't show any error. Is this a bug?

Q learning

How does explore = 0 imply that the agent is not getting trained? Ideally the _learn function will still be called and the agent will still learn. Is this an error, or a flaw in my understanding?
[screenshot of the relevant code, 2016-12-14]
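For reference, in a standard epsilon-greedy Q-learning loop the update runs on every transition regardless of the exploration rate, which matches the intuition in the question; a minimal sketch (states, actions, and reward values here are made up):

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # the learning step: runs on every transition, independent of epsilon
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def choose_action(Q, s, epsilon):
    # epsilon only controls exploration; epsilon = 0 means always greedy
    if random.random() < epsilon:
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)

Q = {0: {'left': 0.0, 'right': 0.0}, 1: {'left': 0.0, 'right': 0.0}}
a = choose_action(Q, 0, epsilon=0.0)   # greedy action selection
q_update(Q, 0, a, r=1.0, s_next=1)     # Q-value still changes
print(Q[0][a])  # → 0.1
```

Whether ml4a's particular implementation couples explore to _learn is a separate question; in the textbook formulation they are independent.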

KeyError in glow.load_model() in GLOW example

Hi there 🌸

I tried to run this notebook:

The error occurred here:
glow.load_model(optimized=False)

output:


Loaded model

KeyError                                  Traceback (most recent call last)
<ipython-input-6-552a2674b3e4> in <module>()
      1 from ml4a.models import glow
      2 
----> 3 glow.load_model(optimized=False)
      4 glow.warm_start()  # optional: because the first run of the model takes a while, this conveniently gets it out of the way

4 frames
/usr/local/lib/python3.7/dist-packages/ml4a/models/glow.py in load_model(optimized)
    234 
    235     # Encoder
--> 236     enc_x = get_tensor(inputs['enc_x'])
    237     enc_eps = [get_tensor(outputs['enc_eps_' + str(i)]) for i in range(n_eps)]
    238     if not optimized:

/usr/local/lib/python3.7/dist-packages/ml4a/models/glow.py in get_tensor(name)
     28 
     29 def get_tensor(name):
---> 30     return tf.compat.v1.get_default_graph().get_tensor_by_name('import/' + name + ':0')
     31 
     32 

/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py in get_tensor_by_name(self, name)
   3781       raise TypeError("Tensor names are strings (or similar), not %s." %
   3782                       type(name).__name__)
-> 3783     return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
   3784 
   3785   def _get_tensor_by_tf_output(self, tf_output):

/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
   3605 
   3606     with self._lock:
-> 3607       return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
   3608 
   3609   def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):

/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
   3647           raise KeyError("The name %s refers to a Tensor which does not "
   3648                          "exist. The operation, %s, does not exist in the "
-> 3649                          "graph." % (repr(name), repr(op_name)))
   3650         try:
   3651           return op.outputs[out_n]

KeyError: "The name 'import/input/image:0' refers to a Tensor which does not exist. The operation, 'import/input/image', does not exist in the graph."
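A debugging sketch for this kind of KeyError: dump the operation names actually present in the imported graph and search for the expected fragment. The real op list would come from [op.name for op in tf.compat.v1.get_default_graph().get_operations()]; the helper below works on a plain list of names, and the example names are hypothetical:

```python
def find_matching_ops(op_names, fragment):
    """Return op names containing the fragment, to see what the
    missing tensor is actually called in this graph (if anything)."""
    return [n for n in op_names if fragment in n]

# hypothetical names, standing in for the real graph's operation list
op_names = ['import/model/input/image', 'import/model/eps_0']
print(find_matching_ops(op_names, 'input/image'))  # → ['import/model/input/image']
```

If nothing matches, the graph that was loaded is likely not the one the wrapper expects (e.g. a stale or mismatched checkpoint).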

Unable to load the network in reverse image search

I have an issue with the Reverse Image Search notebook.
When I try to load the VGG16 (or VGG19) network with
model = keras.applications.VGG16(weights='imagenet', include_top=True)
the download starts but always gets stuck (without any error) before finishing, and I don't know why.

So I tried to download it directly from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5, put it in the myusername/.keras/models folder, and then use
model = keras.models.load_model(pathtothefile)
to define the model, but it doesn't find the file, and I am not even sure this is supposed to work.
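One plausible culprit for a stalled keras.applications download is a truncated weights file left in the cache, which is then reused on the next attempt. A stdlib sketch to sanity-check the file on disk (the path is hypothetical, and the expected size should be taken from the release page rather than this placeholder):

```python
import os

def looks_complete(path, expected_bytes):
    """A truncated download is a common cause of weight-loading failures;
    compare the on-disk size against the size listed on the release page."""
    return os.path.exists(path) and os.path.getsize(path) >= expected_bytes

# hypothetical local path; expected_bytes should come from the release page
weights = os.path.expanduser('~/.keras/models/vgg16_weights_tf_dim_ordering_tf_kernels.h5')
print(looks_complete(weights, 500 * 1024 * 1024))
```

If the file is smaller than expected, deleting it and re-running the keras.applications call forces a fresh download.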

More info for n00bs

It's not mentioned anywhere that wget is required to successfully run the download.sh script.

For the total n00bs like myself, it would be helpful to list pathways to get from zero to running ml4a.

At least for Mac, I find that having Homebrew installed is indispensable.

Maybe I missed the getting started guide?

Windows Python 3, hard work to get running

Hi, I really like your project, but I find it very difficult to get running on Windows.

I navigated to the Image t-SNE example from the 'guides' page and saw it was a notebook. The '.sh' script won't necessarily work on Windows, so I copied and pasted each line into cmd to run them.

I installed all the dependencies, bumping into a few more that were not mentioned at the top of the page (matplotlib, Pillow), but then the directory the Python code looks for isn't there.

Changing that directory to a subfolder in categories gets "[Errno 13] Permission denied."
Despite changing the access permissions on those files I still get the same error. Perhaps this is a uniquely Windows issue.

I'm currently trying to run this example using Docker, which is taking a while to install. It also means I'm using another 1.2 GB of SSD space. It's Friday night; I'm sat staring through my fingers.

I am under the impression that you are using Python 2.7 on a Mac, which is why some of these lines of code are failing for me. I think you should clarify all of the dependencies required, make a helper .py file that installs them, or make Docker the only option, though I would rather run your work natively on Windows so it's easier to extend to my own projects. As a matter of personal preference I would prefer you use Python 3.x too, but beggars can't be choosers.

Thanks

seq2seq and word2vec need keras 2.0 updates

For both of these guides, the base_filter function appears to no longer exist in Keras, and the following import error is given on:

from keras.preprocessing.text import Tokenizer, base_filter

ImportError: cannot import name base_filter
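A drop-in sketch for Keras 2, where base_filter was removed; to my recollection its old return value was the default punctuation filter string, which in Keras 2 became the Tokenizer's default filters argument (verify against your installed Keras version):

```python
def base_filter():
    # replacement for the removed keras.preprocessing.text.base_filter;
    # this string matches Tokenizer's default `filters` in Keras 2
    return '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n'

filter_chars = base_filter()
print(',' in filter_chars)  # → True
```

With this shim the guides' Tokenizer(max_vocab_size, filters=filter_chars) calls should work unchanged.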
