
dcase2017-baseline-system's People

Contributors

emrcak, toni-heittola, wetdog


dcase2017-baseline-system's Issues

The parameter is incorrect

When I run the baseline demo, I get the problem shown below. I did not change anything; can someone help?
Thanks in advance.

(C:\ProgramData\Anaconda3) E:\competitions\TUT-sound\baseline\DCASE2017-baseline-system-master>python applications/task1.py
[I] DCASE 2017::Acoustic Scene Classification / Baseline System
[I]
[I] Initialize [Development setup][folds]
[I] ==================================================
[I]
[I] System
[I] Name : DCASE 2017::Acoustic Scene Classification / Baseline System
[I] Description : DCASE2017 baseline (CPU) using DCASE2017 task 1 development dataset
[I] Parameter set : dcase2017
[I] Setup : Python[3.6.1], Numpy[1.12.1], sklearn[0.18.1], Keras[2.0.5], Theano[0.9.0], Librosa[0.5.1]
[I] Dataset
[I] Name : TUT-acoustic-scenes-2017-development
[I] Active folds : [1, 2, 3, 4]
[I] Evaluator
[I] Save path : applications\system\task1\evaluator
[I] DONE [0:00:00.292879 ]
[I]
[I] Feature extractor
[I] ==================================================
[I]
[I] DONE [0:00:00.837229 ] [4680 items]
[I]
[I] Feature normalizer
[I] ==================================================
[I]
[I] DONE [0:00:00.016056 ]
[I]
[I] System training
[I] ==================================================
[I]
Fold : 0%| | 0/4 [00:00<?, ?it/s][D] Validation set statistics
[D] Scene label | Validation amount (%)
[D] -------------------- + --------------------
[D] beach | 12.82
[D] bus | 10.26
[D] cafe/restaurant | 12.82
[D] car | 12.82
[D] city_center | 12.82
[D] forest_path | 11.54
[D] grocery_store | 12.82
[D] home | 15.38
[D] library | 14.10
[D] metro_station | 10.26
[D] office | 10.26
[D] park | 14.10
[D] residential_area | 12.82
[D] train | 10.26
[D] tram | 12.82
[D]
[D] Training items [1540575]
[D] Validation items [217935]
[D] Keras
[D] Backend [theano]
[D] BLAS library [MKL] (Threads[1], MKL_CBWR[COMPATIBLE])
[D] Theano
[D] Device [cpu]
[D] floatX [float64]
[D] Optimizer [None]
[D] OpenMP [False]
[D]
Using Theano backend.
[WinError 87] The parameter is incorrect

How to evaluate (using sed_eval toolbox) the devtest/evaltest files with no target events (no Onset/Offset time)

Dear @toni-heittola @emrcak,
I am stuck in the evaluation part of the rare sound event detection task (DCASE 2017 Task 2). I can see that in all three dataset parts (devtrain, devtest, and evaltest) approximately 50% of the files contain no target sound events, i.e. they have no onset and offset times. So I am having trouble preparing the reference_event_list and estimated_event_list that are required as input parameters for the sed_eval toolbox. The official DCASE challenge page also requires files with no detected events to be listed, in the following format:
[filename (string)]
If I include this kind of file entry (with empty or missing onset and offset) in reference_event_list and estimated_event_list, I get an empty-slice error from the sed_eval toolbox. As a workaround, I am excluding such files during the training, validation, and testing phases, but my score is pretty low.
Do I need any post-processing to avoid such errors? Kindly help me understand how to handle this situation.
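
For context, this is roughly how I would build the reference list if files without events simply contributed no entries (a minimal sketch; meta_rows and the dictionary keys are hypothetical placeholders for however the annotations are parsed, not code from the baseline):

# Minimal sketch (hypothetical variable and key names): a meta row that holds
# only a filename adds no event entry, instead of an event with empty times.
reference_event_list = []
for row in meta_rows:
    if row.get('event_label'):                      # file contains a target event
        reference_event_list.append({
            'file': row['filename'],
            'event_label': row['event_label'],
            'event_onset': float(row['onset']),
            'event_offset': float(row['offset']),
        })
    # files with no target events are simply left out of the event list

Is this the intended way to handle such files, or does sed_eval expect some other representation for them?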

Best Regards,

Code is not moving forward at Feature Normaliser step

Hi, I am running the code on AWS. The code is triggered from task1.py and runs fine from the terminal. But whenever I run the same thing from an R Shiny app, it gets stuck at the following point.

method_progress = tqdm(current_normalizer_files,
desc=' {0: >15s}'.format('Feature method(From Here) '),
file=sys.stdout,
leave=False,
miniters=1,
disable=self.disable_progress_bar,
ascii=self.use_ascii_progress_bar)

It just hangs at this point. I think it has to do with tqdm, but I am not sure.
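
As an experiment I also tried forcing the progress bar off when stdout is not an interactive terminal, which is the case when the script is launched from the R Shiny app (a sketch of the idea only, not the framework code; current_normalizer_files is the same variable as in the snippet above):

import sys
from tqdm import tqdm

# Sketch: disable tqdm's progress bar when stdout is not a TTY, instead of
# letting it try to redraw in a non-interactive environment.
disable_bar = not sys.stdout.isatty()
method_progress = tqdm(current_normalizer_files,
                       desc='Feature method',
                       file=sys.stdout,
                       leave=False,
                       miniters=1,
                       disable=disable_bar)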

You can see that the normalizer file dictionary is loaded properly:

{'mfcc': 'Dir/Applications/system/task1/feature_normalizer/feature_extractor_a3d3864c319bc59fa2956d12a34e2900/scale_fold0.cpickle',
'mfcc_delta': 'Dir/Applications/system/task1/feature_normalizer/feature_extractor_f17897bd2a133d1c1d1c853e491d2a3a/scale_fold0.cpickle',
'mfcc_acceleration': '/Dir/Applications/system/task1/feature_normalizer/feature_extractor_68a40f5e3b77df9564aaa68c92e95be9/scale_fold0.cpickle'}

Please let me know if there is something wrong in it.
Thanks in advance :)

Error when running python applications/task2.py

Hi, when I run task2.py, an error happens. It displays: 'utf-8' codec can't decode byte 0xb0 in position 189: invalid start byte.
I have set the project encoding to 'utf-8', but it doesn't help. I also tried running python applications/task2.py -n, but it still doesn't work.

[D] Training items [661941]
[D] Validation items [75050]
[D] Keras
[D] Backend [theano]
[D] BLAS library [MKL] (Threads[1], MKL_CBWR[COMPATIBLE])
[D] Theano
[D] Device [cpu]
[D] floatX [float64]
[D] Optimizer [None]
[D] OpenMP [False]
[D]
'utf-8' codec can't decode byte 0xb0 in position 189: invalid start byte

[Win10+Anaconda] Weird result obtained, where should I look first?

Hello,

I set up the DCASE2017 baseline on my system (Win10, 64-bit, Anaconda, Python 2.7).
It seems that all requirements were installed properly and no errors were shown during operation.
However, the final results look really weird, since all F1 scores show NaN.
Here is the command I used to obtain the results below:

$ python ./applications/task2.py -o -s dcase2017_gpu

[screenshot of results]

It would be great if you could share your opinion on where I should look first to solve this issue.

Best regards,

How feature extraction is implemented on 10s audio?

For the acoustic scene classification task, according to the documentation,

Frame size: 40 ms (with 50% hop size)

Feature vector: 40 log mel-band energies in 5 consecutive frames = 200 values

Classification unit: one file (10 seconds of audio).

But the audio is 10 seconds long, which gives roughly 500 frames (considering the 50% overlap). How does the baseline system choose the 5 frames from the 10-second audio? Thanks!
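
To make sure I understand the documentation, here is my own illustration of 5-frame context stacking (a sketch, not the baseline code; it assumes a hop of one frame between stacked windows, which may differ from the baseline's aggregation settings):

import numpy as np

# Sketch: stack 5 consecutive 40-dim frames into one 200-dim vector with a
# sliding window, so a 10-second file (~500 frames) yields ~496 vectors
# rather than a single one; the file-level decision would then have to
# aggregate over all of them.
def stack_frames(features, context=5):
    n_frames, n_bands = features.shape              # features: (n_frames, 40)
    return np.array([
        features[i:i + context].reshape(-1)         # 5 * 40 = 200 values
        for i in range(n_frames - context + 1)
    ])

frames = np.random.rand(500, 40)                    # dummy 10 s file
print(stack_frames(frames).shape)                   # (496, 200)

Is this roughly what the baseline does, and how are the per-vector predictions combined into one decision per file?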

Why are you using 'input_dim' in KerasMixin.create_model()?

Hi,
I am following your format of defining the neural network architecture in the parameters file and letting KerasMixin.create_model() build it, because I think it is clever. In the create_model function, the variable used to set the dimensionality of the input data is input_dim.

My network uses keras.layers.Conv1D, which, when using input_dim, creates the wrong number of parameters. When I use the parameter input_shape instead, the network is fine.

I understand the fully connected network that you released as the baseline is set up using input_dim, but I have checked that it can also be set up with input_shape (if the value is given as a tuple). Therefore, I would like to know whether there is a reason to use input_dim instead of input_shape.

I copied below the summary of networks built with input_dim and input_shape.

Fully connected networks

The first one uses input_dim, as your code does by default. The second case uses input_shape, where the value of this argument is the tuple (4400,). The third also uses input_shape, but with the tuple (4400, 1), and the resulting number of parameters is wrong.

layer_setup['config'] = {'activation': 'relu', 'input_dim': 4400, 'kernel_initializer': 'uniform', 'units': 50}
self.model = Sequential()
self.model.add(LayerClass(**dict(layer_setup.get('config'))))
self.model.summary()

layer_setup['config'] = {'activation': 'relu', 'input_shape': (4400,), 'kernel_initializer': 'uniform', 'units': 50}
self.model = Sequential()
self.model.add(LayerClass(**dict(layer_setup.get('config'))))
self.model.summary()

layer_setup['config'] = {'activation': 'relu', 'input_shape': (4400,1), 'kernel_initializer': 'uniform', 'units': 50}
self.model = Sequential()
self.model.add(LayerClass(**dict(layer_setup.get('config'))))
self.model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_15 (Dense)             (None, 50)                220050    
=================================================================
Total params: 220,050.0
Trainable params: 220,050
Non-trainable params: 0.0
_________________________________________________________________
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_16 (Dense)             (None, 50)                220050    
=================================================================
Total params: 220,050.0
Trainable params: 220,050
Non-trainable params: 0.0
_________________________________________________________________
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_17 (Dense)             (None, 4400, 50)          100       
=================================================================
Total params: 100.0
Trainable params: 100
Non-trainable params: 0.0
_________________________________________________________________

Convolutional networks

In the first case, I am using input_dim as it would be by default in the code. Note the output dimension of the network and the number of parameters. And note also that in the Keras 2 API input_dim has been deprecated.

In the second case, commented out here, I use input_shape = (4400,), but there is an error because Conv1D expects 3 dimensions, so it is not possible to add the layer.

In the third case, I use input_shape=(4400,1) and the resulting network is fine.

layer_setup['config'] = {'filters': 32, 'kernel_size': 64, 'input_dim': 4400}
self.model = Sequential()
self.model.add(LayerClass(**dict(layer_setup.get('config'))))
self.model.summary()

#layer_setup['config'] = {'filters': 32, 'kernel_size': 64, 'input_shape': (4400,)}
#self.model = Sequential()
#self.model.add(LayerClass(**dict(layer_setup.get('config'))))
#self.model.summary()

layer_setup['config'] = {'filters': 32, 'kernel_size': 64, 'input_shape': (4400, 1)}
self.model = Sequential()
self.model.add(LayerClass(**dict(layer_setup.get('config'))))
self.model.summary()

/Users/JL/Documents/SMC10/Master-Thesis/Reference-code/DCASE2017-modified/dcase_framework/learners.py:3: UserWarning: Update your `Conv1D` call to the Keras 2 API: `Conv1D(input_shape=(None, 440..., kernel_size=64, filters=32)`
  """
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_5 (Conv1D)            (None, None, 32)          9011232   
=================================================================
Total params: 9,011,232.0
Trainable params: 9,011,232
Non-trainable params: 0.0
_________________________________________________________________
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_6 (Conv1D)            (None, 4337, 32)          2080      
=================================================================
Total params: 2,080.0
Trainable params: 2,080
Non-trainable params: 0.0
_________________________________________________________________

Do you plan to keep input_dim, in which case I will look for a solution on my side, or do you want to apply some changes to it?
Thank you very much.

Custom Task 1 doesn't work.

The custom_task1 example in the examples folder doesn't work for me, as librosa.logamplitude has been removed in librosa v0.6.
I am working on a fix and will submit a pull request.
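
For anyone hitting the same error, this is the kind of replacement I am testing (a sketch, not the submitted fix; the dummy signal and n_mels=40 are just for illustration):

import numpy as np
import librosa

# librosa.logamplitude() was removed in librosa 0.6;
# librosa.power_to_db() is the replacement for power (mel) spectrograms.
y = np.random.randn(2 * 44100)                      # dummy 2-second signal
mel_power = librosa.feature.melspectrogram(y=y, sr=44100, n_mels=40)
log_mel = librosa.power_to_db(mel_power)            # was: librosa.logamplitude(mel_power)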

python applications/task3.py crashes

Today when I ran the command python task3.py -n, it produced the following error:

[D] Feature vector [200]
[D] Batch size [256]
[D] Epochs [200]
[I] Training
[I] | Loss | Metric |
[I] | binary_crossentropy | binary_accuracy |
[I] Epoch | Train | Val | Train | Val | Time
[I] ----- + -------- + -------- + -------- + -------- + ---------------
Traceback (most recent call last):
File "task3.py", line 294, in
sys.exit(main(sys.argv))
File "task3.py", line 228, in main
app.system_training()
File "/home/zwe/Downloads/DCASE2017-baseline-system-master/dcase_framework/decorators.py", line 38, in function_wrapper
to_return = func(*args, **kwargs)
File "/home/zwe/Downloads/DCASE2017-baseline-system-master/dcase_framework/application_core.py", line 2214, in system_training
validation_files=validation_files
File "/home/zwe/Downloads/DCASE2017-baseline-system-master/dcase_framework/learners.py", line 2468, in learn
class_weight=class_weight
File "/root/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 1239, in fit
validation_freq=validation_freq)
File "/root/anaconda3/lib/python3.7/site-packages/keras/engine/training_arrays.py", line 192, in fit_loop
callbacks._call_batch_hook('train', 'begin', batch_index, batch_logs)
File "/root/anaconda3/lib/python3.7/site-packages/keras/callbacks/callbacks.py", line 84, in _call_batch_hook
batch_hook = getattr(callback, hook_name)
AttributeError: 'ProgressLoggerCallback' object has no attribute 'on_train_batch_begin'

I checked the code I downloaded from GitHub, and the file keras_utils.py has been changed according to commit 14f4e3d. I don't know whether this is again a problem with the Keras version.
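
As a temporary workaround I am experimenting with something like the following (a sketch only; it assumes ProgressLoggerCallback can be imported from dcase_framework.keras_utils and simply turns the missing hooks into no-ops, so per-batch progress logging is lost):

from dcase_framework.keras_utils import ProgressLoggerCallback

# Sketch: newer Keras versions call per-batch hooks that the framework's
# callback (written for an older Keras) does not define; no-op methods at
# least let training proceed.
class PatchedProgressLoggerCallback(ProgressLoggerCallback):
    def on_train_batch_begin(self, batch, logs=None):
        pass

    def on_train_batch_end(self, batch, logs=None):
        pass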

testing model

Hi, when I read the testing code for Task 1, the steps in application_core.py are "# Feature stacking", "# Normalize features", "# Aggregate features", "# Frame probabilities", and "# Scene recognizer",
but I can't find the step where the data is actually passed through the neural network. Where does that happen?

python applications/task3.py crashes

[I] System training
[I] ==================================================
[I]
Fold : 0%| | 0/4 [00:00<?, ?it/s'ascii' codec can't decode byte 0xda in position 1: ordinal not in range(128) | 0/1 [00:00<?, ?it/s]

DataSequencer.process()

If I understand correctly, line 322:

segment_end_frame = segment_start_frame + self.hop_size

should be:

segment_end_frame = segment_start_frame + self.frames

The same applies to line 319.
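
To illustrate what I mean, here is my own sketch of the intended segmentation (not the framework code): the segment length and the hop between segments are two different things.

# Illustration only: each segment should span `frames` frames, while
# consecutive segments start `hop_size` frames apart.
def sequence(data, frames, hop_size):
    segments = []
    segment_start_frame = 0
    while segment_start_frame + frames <= len(data):
        segment_end_frame = segment_start_frame + frames    # window length, not hop_size
        segments.append(data[segment_start_frame:segment_end_frame])
        segment_start_frame += hop_size
    return segments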

Two channel

How can I keep two-channel features?

If I set up

mfcc:
  mono: false
Stacking_recipe: mfcc

it won't use two channels, and if I set it up like

Stacking_recipe: mfcc=0;mfcc=1

it will use both channels, but stacked into a single feature vector instead. So how can I get two channels without stacking them?

Task1.py crash

I'm running the baseline for Task 1 on an ADA cluster. During training, it reaches 72% in the first fold before it throws the "Killed" error.
Anything that can be done in this regard?

Input format for dcase_util.data.DecisionEncoder Class - Doubt

Hi @toni-heittola ,
I was going through the dcase_util package documentation, and under the Decision encoding section it says: 'DecisionEncoder class (dcase_util.data.DecisionEncoder) can be used to process a binary 2D data matrix (class, time) with frame-wise activity'. In my case (the project I am working on), the prediction output I am getting is in (time, class) format. So I was wondering whether it is OK to pass the 2D matrix to dcase_util.data.DecisionEncoder in (time, class) format, since I am getting very poor scores from both the segment-based and event-based evaluation metrics (ER & F1).
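
In case it clarifies my setup, this is roughly what I do before the encoding step (a sketch with dummy data; the threshold of 0.5 and the shapes are just examples). My question is whether this transpose is actually required:

import numpy as np

# Sketch with dummy data: predictions come out as (time, class); the
# documentation describes a (class, time) binary matrix, so I transpose
# after thresholding.
predictions = np.random.rand(500, 6)                # (time, class)
activity_matrix = (predictions > 0.5).T             # -> (class, time), binary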

Best Regards

Issue of running task2_cakir.py in the example folder.

When I try to run python3 task2_cakir.py in the examples folder, I get the following error:

[Errno 2] No such file or directory: '/wrk/cakir/DONOTREMOVE/DCASE2017_task_2/feature_extractor/dataset_9b4fa58bf77c506a30da403feee94c39/parameters.yaml'

Error when running the "keras_seq.py" example from command-line (and only from command-line)

Description

I get an error when running the keras_seq.py example on Windows 10 from cmd.exe (task2.py also fails), but not when I run the same thing from Sublime Text 3. I have never seen this kind of error before, and online searching has not been helpful.

Sorry to make you review a bug that's probably not your fault, and thanks for your time developing this library.

Steps/Code to Reproduce

Run python keras_seq.py -s dcase2017; the same happens with dcase2017_gpu and possibly other parameter sets.

Expected Results

When I run the keras_seq.py example with the Sublime Text 3 build command (Ctrl+B), it works just fine. The following is the output from the working run, starting right after the point where the bug occurs when running from cmd.
PS: the YAML file was modified so that Keras uses the TensorFlow backend, since Theano was discontinued last year and its latest (and last) version (1.0.0) is not compatible with the library (and probably with dcase_util, too).

[D]   Validation
[D]     Event label          | Files (%)            
[D]     -------------------- + -------------------- 
[D]     -                    | 10.08 
[D]     babycry              | 10.29 
[D]  
[D]   Training items 	[661691]
[D]   Validation items 	[75050]
[D]   Keras
[D]     Backend 	[tensorflow]
[D]     BLAS library	[MKL]		(Threads[4], MKL_CBWR[COMPATIBLE])
[D]   Tensorflow
[D]     Device 		[gpu]
[D]   
[D]   Model summary
[D]     Layer type      | Output               | Param   | Name                   | Connected to                | Activ.  | Init   
[D]     --------------- + -------------------- + ------  + ---------------------  + --------------------------- + ------- + ------
[D]     Dense           | (None, 50)           | 10050   | dense_1                | dense_1_input[0][0]         | relu    | uniform
[D]     Dropout         | (None, 50)           | 0       | dropout_1              | dense_1[0][0]               | ---     | ---    
[D]     Dense           | (None, 50)           | 2550    | dense_2                | dropout_1[0][0]             | relu    | uniform
[D]     Dropout         | (None, 50)           | 0       | dropout_2              | dense_2[0][0]               | ---     | ---    
[D]     Dense           | (None, 1)            | 51      | dense_3                | dropout_2[0][0]             | sigmoid | uniform
[D]   
[D]   Parameters
[D]     Trainable	[12,651]
[D]     Non-Trainable	[0]
[D]     Total		[12,651]
[D]   
[D]   Positives items 	[27055]	(4.09 %)
[D]   Negatives items 	[634636]	(95.91 %)
[D]   Class weights 	[None]	
[D]   Feature vector 	[200]
[D]   Batch size 	[256]
[D]   Epochs 		[200]

Actual Results

[I] System training
[I] ==================================================
[I]
           Fold           :   0%|                                                                  | 0/1 [00:00<?, ?it/s][D]   Validation     Event :   0%|                                                                  | 0/1 [00:00<?, ?it/s] [D]     Event label          | Files (%)
[D]     -------------------- + --------------------
[D]     -                    | 10.37
[D]     babycry              | 10.81
[D]
[D]   Keras
[D]     Backend         [theano]
[D]     BLAS library    [MKL]           (Threads[4], MKL_CBWR[COMPATIBLE])
[D]   Theano
[D]     Device          [gpu]
[D]     floatX          [float32]
[D]     Optimizer       [fast_run]
[D]     NVCC fastmath   [True]
[D]     OpenMP          [True]
[D]
D:\Programs\Miniconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using Tensorflow backend.
[WinError 1] Incorrect function

Versions

>>> import platform; print(platform.platform())
Windows-10-10.0.16299-SP0
>>> import sys; print("Python", sys.version)
Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.14.2
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.0.1
>>> import matplotlib; print("Matplotlib", matplotlib.__version__)
Matplotlib 2.2.2
>>> import librosa; print("librosa", librosa.__version__)
librosa 0.6.0
>>> import keras; print("keras", keras.__version__)
D:\Programs\Miniconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
keras 2.1.5

results of task3

Why do the task 3 results only include the street evaluation dataset for which reference data has been published, and not the event class-wise results (brakes squeaking, car, children, large vehicle, people speaking, people walking) described in the detailed instructions at https://tut-arg.github.io/DCASE2017-baseline-system/? How can I get results for the other classes/datasets? Looking forward to your reply.

Typo in dcase_framework/features.py comments

In lines 132, 140, and 1416, should it be normalizer.normalize() instead of normalizer.normalizer()? I can only find a definition of normalize() in the FeatureNormalizer class.

KeyError: 'libraries' in numpy.__config__.blas_opt_info

Hi,
I am running task1.py on OSX and I get the KeyError from blas_opt_info when Keras is being set up. When I run it on Linux, I do not get this error.

For reference:

Traceback (most recent call last):
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1596, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 974, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/JL/Documents/SMC10/Master-Thesis/Reference-code/DCASE2017-baseline-system/applications/task1.py", line 287, in <module>
    sys.exit(main(sys.argv))
  File "/Users/JL/Documents/SMC10/Master-Thesis/Reference-code/DCASE2017-baseline-system/applications/task1.py", line 223, in main
    app.system_training()
  File "/Users/JL/Documents/SMC10/Master-Thesis/Reference-code/DCASE2017-baseline-system/dcase_framework/decorators.py", line 38, in function_wrapper
    to_return = func(*args, **kwargs)
  File "/Users/JL/Documents/SMC10/Master-Thesis/Reference-code/DCASE2017-baseline-system/dcase_framework/application_core.py", line 1245, in system_training
    learner.learn(data=data, annotations=annotations)
  File "/Users/JL/Documents/SMC10/Master-Thesis/Reference-code/DCASE2017-baseline-system/dcase_framework/learners.py", line 1032, in learn
    self._setup_keras()
  File "/Users/JL/Documents/SMC10/Master-Thesis/Reference-code/DCASE2017-baseline-system/dcase_framework/learners.py", line 528, in _setup_keras
    blas_libraries = numpy.__config__.blas_opt_info['libraries']
KeyError: 'libraries'

I have found a similar issue. In it, the developer said it was because numpy is using OSX's Accelerate BLAS, which is missing the 'libraries' key in that dict.

This is the output of numpy.show_config():

blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
  NOT AVAILABLE
atlas_3_10_blas_threads_info:
  NOT AVAILABLE
atlas_3_10_blas_info:
  NOT AVAILABLE
atlas_blas_threads_info:
  NOT AVAILABLE
atlas_blas_info:
  NOT AVAILABLE
blas_opt_info:
    extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers']
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3), ('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
  NOT AVAILABLE
atlas_3_10_threads_info:
  NOT AVAILABLE
atlas_3_10_info:
  NOT AVAILABLE
atlas_threads_info:
  NOT AVAILABLE
atlas_info:
  NOT AVAILABLE
lapack_opt_info:
    extra_compile_args = ['-msse3']
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3), ('HAVE_CBLAS', None)]
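
For now I am working around it locally with a defensive lookup (a sketch of the idea, mirroring the line in learners.py that raises the KeyError; the 'unknown' fallback is just a placeholder):

import numpy

# Sketch: on OSX with Accelerate, blas_opt_info has no 'libraries' key,
# so fall back to a placeholder instead of raising KeyError.
blas_opt_info = getattr(numpy.__config__, 'blas_opt_info', {})
blas_libraries = blas_opt_info.get('libraries', ['unknown'])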

Running the minimal parameters for Task 1

Hello,

I am trying to test whether the minimal system works for Task 1. I am running it on a Windows 10 machine with Python 2.7. It does the feature extraction and the feature normalization, and then I get the following error in the system evaluation.

Traceback (most recent call last):
File "task1.py", line 287, in
sys.exit(main(sys.argv))
File "task1.py", line 236, in main
app.system_evaluation()
File "D:\DCASE2017\dcase_framework\decorators.py", line 38, in function_wrapper
to_return = func(*args, **kwargs)
File "D:\DCASE2017\dcase_framework\application_core.py", line 1477, in system_evaluation
estimated_scene_list=estimated_scene_list)
File "C:\Continuum\Anaconda2\lib\site-packages\sed_eval\scene.py", line 148, in evaluate
y_true.append(reference_item_matched['scene_label'])
TypeError: list indices must be integers, not str

python3.7 + tensorflow1.14.0 + keras2.3.1

anaconda3/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 1470, in count_params return np.prod(x.shape.as_list())
AttributeError: 'TensorVariable' object has no attribute 'as_list'

Please, is my software version combination wrong?

Multicore CPU

Hi, I couldn't manage to use all the CPUs. I am trying with Task 1.

I set OpenMP to True and set the number of threads to the number of CPU cores.

Is there anything else to do?
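
For reference, this is roughly what I set before anything Keras/Theano-related is imported (a sketch; the thread count of 8 is just an example for my machine):

import os

# Sketch: set thread-related variables before Keras / Theano are imported,
# otherwise the OpenMP settings may not take effect.
os.environ['OMP_NUM_THREADS'] = '8'
os.environ['MKL_NUM_THREADS'] = '8'
os.environ['THEANO_FLAGS'] = 'device=cpu,openmp=True'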

Thanks

'urllib' has no attribute 'URLError'

Hi, I am using python 3.5 and when I run task1.py, I get the error:
'urllib' has no attribute 'URLError'

URLError is included in the urllib.error module, so if I import it as from urllib.error import URLError, it works fine.

In the same way, urlretrieve is in the urllib.request module and I have to import it accordingly to make it work.
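
Concretely, the local change I made in dcase_framework/datasets.py looks roughly like this (a sketch only; the URL and file name are placeholders for the values that datasets.py builds from its package list):

import socket
from urllib.request import urlretrieve
from urllib.error import URLError

# Placeholder values standing in for the real package URL and target path.
remote_file = 'https://example.com/dataset_package.zip'
local_file = 'dataset_package.zip'

try:
    local_filename, headers = urlretrieve(remote_file, local_file)
except (URLError, socket.timeout) as error:
    print('Download failed: {}'.format(error))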

I have also tried Python versions 2.7 and 3.6 and the same error appears; could it be an error in my installation?

I copy the error message below for reference:

JL-MBP:applications JL$ python3 task1.py 
[I] DCASE 2017::Acoustic Scene Classification / Baseline System
[I] 
[I] Initialize [Development setup][folds]
[I] ==================================================
[I] 
[I]   System              
[I]     Name                 : DCASE 2017::Acoustic Scene Classification / Baseline System
[I]     Description          : DCASE2017 baseline (CPU) using DCASE2017 task 1 development dataset
[I]     Parameter set        : dcase2017
[I]     Setup                : Python[3.5.0], Numpy[1.12.1], sklearn[0.18.1], Keras[2.0.2], Theano[0.9.0], Librosa[0.5.0]
[I]   Dataset             
[I]     Name                 : TUT-acoustic-scenes-2017-development
[I]     Active folds         : [1, 2, 3, 4]
[I]   Evaluator           
[I]     Save path            : system/task1/evaluator
Download package list    :   0%|                             Traceback (most recent call last):                                                             | 0/14 [00:00<?, ?it/s]
  File "/Users/JL/Documents/SMC10/Master Thesis/Reference code/DCASE2017-baseline-system/dcase_framework/datasets.py", line 728, in download
    local_filename, headers = urllib.urlretrieve(
AttributeError: module 'urllib' has no attribute 'urlretrieve'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "task1.py", line 287, in <module>
    sys.exit(main(sys.argv))
  File "task1.py", line 208, in main
    app.initialize()
  File "/Users/JL/Documents/SMC10/Master Thesis/Reference code/DCASE2017-baseline-system/dcase_framework/decorators.py", line 38, in function_wrapper
    to_return = func(*args, **kwargs)
  File "/Users/JL/Documents/SMC10/Master Thesis/Reference code/DCASE2017-baseline-system/dcase_framework/application_core.py", line 534, in initialize
    self.dataset.initialize()
  File "/Users/JL/Documents/SMC10/Master Thesis/Reference code/DCASE2017-baseline-system/dcase_framework/datasets.py", line 359, in initialize
    self.download()
  File "/Users/JL/Documents/SMC10/Master Thesis/Reference code/DCASE2017-baseline-system/dcase_framework/datasets.py", line 737, in download
    except (urllib.URLError, socket.timeout) as e:
AttributeError: module 'urllib' has no attribute 'URLError'

evtF1 results in nan when running with tensorflow backend

Hi,
I'm running the code for task2 (keras_seq and task2_cakir) on the GPU using the tensorflow backend. The evtF1 score at some point comes close to zero and then finally results in a nan while the evtER is 1.

I tried decreasing the learning_rate, which postponed the problem to a later epoch, but evtF1 still ends up as NaN.

Do you know how to tune the hyperparameters when switching from theano to tensorflow?

PS: the DR is also 1, so the model just learns to predict no class, resulting in a recall of 0.

EDIT: I just figured out that the CNN model seems to work and only the RNN / CRNN models have the issue described above.
