nilmtk / nilmtk-contrib
License: Apache License 2.0
Thank you for your work. I encountered a problem while debugging the project, specifically in the "pandas" package; I don't know if it is a version problem, and I hope to get your help. In addition, what are the datasets "dataport.hdf5" and "dred.h5"? I didn't find the corresponding datasets. If you have any information to share, I would really appreciate it.
File "G:\Anaconda\envs\ceshi\lib\site-packages\pandas\core\generic.py", line 5273, in getattr
return object.getattribute(self, name)
AttributeError: 'DataFrame' object has no attribute 'ix'
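My guess is that this is indeed a version problem: DataFrame.ix was removed in pandas 1.0, so older code using .ix needs the label- or position-based indexers instead (a tiny illustration of my own):

import pandas as pd

df = pd.DataFrame({"power": [1.0, 2.0]}, index=["a", "b"])
# old (AttributeError on pandas >= 1.0): df.ix[0]
print(df.iloc[0])   # position-based replacement
print(df.loc["a"])  # label-based replacement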
Hi,
Thanks for your awesome work! I have some questions about the experiments, as follows:
How can I get results similar to the ones you posted?
I ran your code with the same dataset and number of epochs for every algorithm, but my results differ from yours.
Do you think this is a good result?
'appliances': ['washing machine'],
'methods': {
'WindowGRU': WindowGRU({'n_epochs':30}),
'RNN': RNN({'n_epochs':50}),
'DAE': DAE({'n_epochs':50}),
'Seq2Point': Seq2Point({'n_epochs':50}),
'Seq2Seq': Seq2Seq({'n_epochs':50}),
'Convlstm': Convlstm({'n_epochs':30,}),
},
'train': {
'datasets': {
'1': {
'path': 'C:/Users/Jia/Desktop/NILM/ukdale.h5',
'buildings': {
1: {
'start_time': '2014-01-05',
'end_time': '2014-02-05'
},
}
}
}
},
'test': {
'datasets': {
'Datport': {
'path': 'C:/Users/Jia/Desktop/NILM/ukdale.h5',
'buildings': {
1: {
'start_time': '2014-04-1',
'end_time': '2014-04-7'
},
}
}
},
'metrics': ['mae','rmse' ]
}
Thanks!
How is it possible to use the API to load a pre-trained model?
Hi! In dae.py, I noticed that the division by std happens twice in the "normalize_input" function. Is it a mistake, or is it intentional?
Thank you!
Hi,
I was wondering if you are familiar with the following error in Window GRU:
Finished training for WindowGRU
Joint Testing for all algorithms
Dropping missing values
Generating predictions for : WindowGRU
Traceback (most recent call last):
File "simple_disag.py", line 407, in <module>
api_results_f1 = API(experiment_f1)
File "/home/chklemen/anaconda3/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 65, in __init__
self.experiment(params)
File "/home/chklemen/anaconda3/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 123, in experiment
self.test_jointly(d)
File "/home/chklemen/anaconda3/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 374, in test_jointly
self.call_predict(self.classifiers)
File "/home/chklemen/anaconda3/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 420, in call_predict
'Europe/London')
File "/home/chklemen/anaconda3/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 468, in predict
pred_list = clf.disaggregate_chunk(test_elec)
File "/home/chklemen/anaconda3/envs/thesis/lib/python3.6/site-packages/nilmtk_contrib/disaggregate/WindowGRU.py", line 147, in disaggregate_chunk
mains = mains.values.reshape((-1,self.sequence_length,1))
AttributeError: 'numpy.ndarray' object has no attribute 'values'
Closing remaining open files:../../data/SynD.h5...done../../data/SynD.h5...done
Sometimes I get this error for test chunks that work fine for RNN or DAE.
Any idea?
best,
C
Hi there,
this is not an issue but rather something I was wondering about.
While running AFHMM and AFHMM + SAC in my experiments, I noticed that those approaches occupy a lot of CPU power. Even our simulation servers sometimes struggle with these algorithms.
Have you thought about providing a single-thread implementation of disaggregate_chunk? Is it possible to limit the number of created threads in an easy manner?
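For instance, something like the following is what I had in mind; these are the usual BLAS/OpenMP environment knobs and have to be set before numpy is imported (untested sketch):

import os

# Cap the threads the underlying BLAS/OpenMP runtimes may spawn; must be set
# before numpy (and hence nilmtk) is imported. The values are illustrative.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

from nilmtk_contrib.disaggregate import AFHMM, AFHMM_SAC  # import afterwards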
Feel free to ignore this :)
Best,
C
Hi,
I've been trying out some of the neural nets from the API documentation and I get the same error no matter which dataset I am using or what example I use. Currently tearing my hair out!
I have converted the REDD dataset to HDF5 using the documentation and then tried to feed it into the API as below. I've also pasted the error below.
from nilmtk.disaggregate import Mean
from nilmtk_contrib.disaggregate import DAE, Seq2Point, Seq2Seq, RNN, WindowGRU
redd = {
'power': {
'mains': ['apparent','active'],
'appliance': ['apparent','active']
},
'sample_rate': 60,
'appliances': ['fridge'],
'methods': {
'WindowGRU':WindowGRU({'n_epochs':50,'batch_size':32}),
'RNN':RNN({'n_epochs':50,'batch_size':32}),
'DAE':DAE({'n_epochs':50,'batch_size':32}),
'Seq2Point':Seq2Point({'n_epochs':50,'batch_size':32}),
'Seq2Seq':Seq2Seq({'n_epochs':50,'batch_size':32}),
'Mean': Mean({}),
},
'train': {
'datasets': {
'REDD': {
'path': '/home/rpolea/redd_test.h5',
'buildings': {
1: {
'start_time': '2011-04-18',
'end_time': '2011-04-28'
},
}
}
}
},
'test': {
'datasets': {
'REDD': {
'path': '/home/rpolea/redd_test.h5',
'buildings': {
1: {
'start_time': '2011-05-01',
'end_time': '2011-05-03'
},
}
}
},
'metrics':['mae']
}
}
Started training for WindowGRU
Joint training for WindowGRU
............... Loading Data for training ...................
Loading data for REDD dataset
Loading building ... 1
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
Dropping missing values
Training processing
First model training for fridge
Epoch 1/50
358/358 [==============================] - ETA: 0s - loss: 0.0116
ValueError Traceback (most recent call last)
<ipython-input> in <module>
----> 1 API(redd)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in __init__(self, params)
44 self.DROP_ALL_NANS = params.get("DROP_ALL_NANS", True)
45 self.site_only = params.get('site_only',False)
---> 46 self.experiment()
47
48
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in experiment(self)
89 else:
90 print ("Joint training for ",clf.MODEL_NAME)
---> 91 self.train_jointly(clf,d)
92
93 print ("Finished training for ",clf.MODEL_NAME)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in train_jointly(self, clf, d)
238 self.train_submeters = appliance_readings
239
--> 240 clf.partial_fit(self.train_mains,self.train_submeters)
241
242
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk_contrib/disaggregate/WindowGRU.py in partial_fit(self, train_main, train_appliances, do_preprocessing, **load_kwargs)
70 checkpoint = ModelCheckpoint(filepath,monitor='val_loss',verbose=1,save_best_only=True,mode='min')
71 train_x, v_x, train_y, v_y = train_test_split(mains, app_reading, test_size=.15,random_state=10)
---> 72 model.fit(train_x,train_y,validation_data=[v_x,v_y],epochs=self.n_epochs,callbacks=[checkpoint],shuffle=True,batch_size=self.batch_size)
73 model.load_weights(filepath)
74
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1139 workers=workers,
1140 use_multiprocessing=use_multiprocessing,
-> 1141 return_dict=True)
1142 val_logs = {'val_' + name: val for name, val in val_logs.items()}
1143 epoch_logs.update(val_logs)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict)
1387 with trace.Trace('test', step_num=step, _r=1):
1388 callbacks.on_test_batch_begin(step)
-> 1389 tmp_logs = self.test_function(iterator)
1390 if data_handler.should_sync:
1391 context.async_wait()
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
826 tracing_count = self.experimental_get_tracing_count()
827 with trace.Trace(self._name) as tm:
--> 828 result = self._call(*args, **kwds)
829 compiler = "xla" if self._experimental_compile else "nonXla"
830 new_tracing_count = self.experimental_get_tracing_count()
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
869 # This is the first call of __call__, so we have to initialize.
870 initializers = []
--> 871 self._initialize(args, kwds, add_initializers_to=initializers)
872 finally:
873 # At this point we know that the initialization is complete (or less
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
724 self._concrete_stateful_fn = (
725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 726 *args, **kwds))
727
728 def invalid_creator_scope(*unused_args, **unused_kwds):
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2967 args, kwargs = None, None
2968 with self._lock:
-> 2969 graph_function, _ = self._maybe_define_function(args, kwargs)
2970 return graph_function
2971
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3359
3360 self._function_cache.missed.add(call_context_key)
-> 3361 graph_function = self._create_graph_function(args, kwargs)
3362 self._function_cache.primary[cache_key] = graph_function
3363
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3204 arg_names=arg_names,
3205 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3206 capture_by_value=self._capture_by_value),
3207 self._function_attributes,
3208 function_spec=self.function_spec,
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
988 _, original_func = tf_decorator.unwrap(python_func)
989
--> 990 func_outputs = python_func(*func_args, **func_kwargs)
991
992 # invariant: func_outputs
contains only Tensors, CompositeTensors,
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
632 xla_context.Exit()
633 else:
--> 634 out = weak_wrapped_fn().wrapped(*args, **kwds)
635 return out
636
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
975 except Exception as e: # pylint:disable=broad-except
976 if hasattr(e, "ag_error_metadata"):
--> 977 raise e.ag_error_metadata.to_exception(e)
978 else:
979 raise
ValueError: in user code:
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1233 test_function *
return step_function(self, iterator)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1224 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1217 run_step **
outputs = model.test_step(data)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1183 test_step
y_pred = self(x, training=False)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:998 __call__
input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py:207 assert_input_compatibility
' input tensors. Inputs received: ' + str(inputs))
ValueError: Layer sequential_11 expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 99, 1) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 1) dtype=float32>]
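Digging around, I suspect the list-valued validation_data in WindowGRU.py (validation_data=[v_x, v_y], visible in the training traceback above) is what newer TensorFlow releases misread as a second input; the Keras docs pass validation_data as a tuple. A minimal repro of the fix on a toy model (my own sketch, not the repo's code):

import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x, y = np.random.rand(32, 4), np.random.rand(32, 1)
v_x, v_y = np.random.rand(8, 4), np.random.rand(8, 1)
# Tuple form, not [v_x, v_y]: the list form can be parsed as two model inputs.
model.fit(x, y, validation_data=(v_x, v_y), epochs=1, batch_size=8)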
In the RNN model, input_shape=(input_window_width, 1) and the output is Dense(1, activation='linear');
does that mean it takes a sequence as input and outputs a single point?
Can anyone upload the environment.yml, or the versions of keras, tensorflow, nilmtk and nilmtk-contrib? The BERT model requires keras.layers.MultiHeadAttention, which does not work with the Keras versions installed after conda-installing nilmtk and nilmtk-contrib, and upgrading keras and tensorflow causes conflicts after which nilmtk cannot be used.
The padding in WindowGRU is added at the end of the mains instead of the beginning. Is this the intended behavior?
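To make the two variants concrete (a toy example of my own):

import numpy as np

x = np.array([1., 2., 3.])
print(np.pad(x, (2, 0), 'constant'))  # pad at the beginning: [0. 0. 1. 2. 3.]
print(np.pad(x, (0, 2), 'constant'))  # pad at the end:       [1. 2. 3. 0. 0.]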
Has anyone encountered the following error?
2022-03-03 23:42:05.100902: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Using TensorFlow backend.
2022-03-03 23:42:08.117155: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2022-03-03 23:42:08.130097: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2022-03-03 23:42:08.133384: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-NU98JL3
2022-03-03 23:42:08.133667: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-NU98JL3
Traceback (most recent call last):
File "D:/代码/nilmtk-contrib-master/sample_notebooks/NILMTK API.py", line 5, in
from nilmtk_contrib.disaggregate import DAE,Seq2Point, Seq2Seq, RNN, WindowGRU,bert
File "D:\代码\nilmtk-contrib-master\nilmtk_contrib_init_.py", line 1, in
from . import disaggregate
File "D:\代码\nilmtk-contrib-master\nilmtk_contrib\disaggregate_init_.py", line 14, in
from .bert import BERT
File "D:\代码\nilmtk-contrib-master\nilmtk_contrib\disaggregate\bert.py", line 15, in
from keras.layers import Layer,MultiHeadAttention,LayerNormalization,Embedding
ImportError: cannot import name 'MultiHeadAttention' from 'keras.layers' (D:\ANACONDA3\envs\nilm\lib\site-packages\keras\layers\__init__.py)
Closing remaining open files:C:\Users\DJY\AppData\Local\Temp\nilmtk-i7g167np.h5...done
Hello everyone,
I have some questions about call_preprocessing in all the deep-learning methods.
The methods use call_preprocessing to process the data, but why pad the mains data and the appliance data? Could you give me the paper that uses this method to process the data?
new_mains = np.pad(new_mains,(units_to_pad,units_to_pad),'constant',constant_values=(0,0))
new_mains = np.array([new_mains[i:i + n] for i in range(len(new_mains) - n + 1)])
new_app_readings = np.pad(new_app_readings,(units_to_pad,units_to_pad),'constant',constant_values = (0,0))
new_app_readings = np.array([new_app_readings[i:i + n] for i in range(len(new_app_readings) - n + 1)])
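As a toy illustration of what the padding achieves (my own example, assuming units_to_pad = n // 2 as in the seq2point preprocessing): every original sample ends up at the centre of exactly one window, so the number of windows equals the original series length.

import numpy as np

n = 5                                   # sequence length
mains = np.array([1., 2., 3., 4., 5.])
units_to_pad = n // 2
padded = np.pad(mains, (units_to_pad, units_to_pad), 'constant', constant_values=(0, 0))
windows = np.array([padded[i:i + n] for i in range(len(padded) - n + 1)])
print(windows.shape)  # (5, 5): one centred window per original sample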
What is the reason for the chosen values of sequence_length, mains_mean and mains_std? Why is sequence_length set to 19 in RNN but to 99 in Seq2Seq and Seq2Point?
Why are mains_mean set to 1800 and mains_std set to 600 in all methods?
Could you help me? Thank you very much!
Best wishes!
Hi,
Has anyone successfully run "Using the API with NILMTK-CONTRIB" recently? I used
conda create -n nilm -c conda-forge -c nilmtk nilmtk-contrib
conda install cudatoolkit=11.0 cudnn
pip install tensorflow-gpu==2.4.0
to install the virtual environment. Then, when I tried to run the code, I got the warning:
Started training for WindowGRU
Joint training for WindowGRU
............... Loading Data for training ...................
Loading data for Dataport dataset
Loading building ... 1
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
Dropping missing values
Training processing
First model training for fridge
WARNING:tensorflow:Layer gru will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
Train on 10206 samples, validate on 1802 samples
Epoch 1/10
10206/10206 [==============================] - ETA: 0s - loss: 0.0099
Epoch 00001: val_loss improved from inf to 0.00629, saving model to windowgru-temp-weights-74894.h5
10206/10206 [==============================] - 78s 8ms/sample - loss: 0.0099 - val_loss: 0.0063
Epoch 2/10
10206/10206 [==============================] - ETA: 0s - loss: 0.0065
......
Any idea how to solve the issue? It's slower than the CPU (it takes around 45 s per epoch).
My config:
RTX3070
CUDA: 11.0
cudnn: 8.1
tensorflow-gpu: 2.4.0
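From the TensorFlow docs, my understanding is that a GRU only dispatches to the fused cuDNN kernel when it keeps all the defaults below, which WindowGRU's layers apparently don't; a minimal sketch of a cuDNN-eligible layer (my own, not the repo's code):

from tensorflow.keras.layers import GRU, Bidirectional

# All of these settings are required for the fused cuDNN kernel in TF 2.x:
# tanh/sigmoid activations, no recurrent dropout, no unrolling, bias on,
# and reset_after=True.
cudnn_ok = Bidirectional(GRU(64,
                             activation='tanh',
                             recurrent_activation='sigmoid',
                             recurrent_dropout=0,
                             unroll=False,
                             use_bias=True,
                             reset_after=True,
                             return_sequences=True))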
Hello everyone,
Thanks for your contribution! But I have a question about the metrics.
How many metrics can we use now? I only found mae and rmse; I didn't find F1 score, accuracy, or others.
Thank you very much!
Since keras-2.4 and tensorflow-2.3, all import keras statements must be replaced by import tensorflow.keras, as stated on SO. This is due to an internal change in the library that breaks earlier installations. See also here.
Two solutions here:
1. Pin keras>=2.2.4,<2.4 and tensorflow>=2.0,<2.3 in the setup.py and conda requirements. However, this only increases the technical debt.
2. Replace the import keras statements with import tensorflow.keras.
I can handle the PR and go for solution 2, but I would like your opinion first @PMeira :).
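For illustration, the replacement in question (my own example imports):

# before (standalone Keras; breaks with keras >= 2.4 / tensorflow >= 2.3):
#   from keras.layers import Dense, Conv1D
# after (Keras bundled with TensorFlow):
from tensorflow.keras.layers import Dense, Conv1D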
Libraries:
python-3.8.5
tensorflow-gpu-2.3.0
keras-2.4.2
nilmtk-0.4.2
nilmtk-contrib at origin/master
Code:
import nilmtk.api
import nilmtk.disaggregate
import nilmtk_contrib
test_GPU = {
"power": { "mains": [ "active" ], "appliance": [ "active" ] },
"sample_rate": 10,
"appliances": [ "kettle", ],
"artificial_aggregate": False,
"chunk_size": 2**16,
"DROP_ALL_NANS": True,
"methods": {
"Seq2Point": nilmtk_contrib.disaggregate.Seq2Point({
"batch_size": 1024,
"chunk_wise_training": True,
}),
},
"train": {
"datasets": {
"UK-DALE": {
"path": "datasets/UK-DALE/ukdale2017.h5",
"buildings": {
1: { "start_time": "2013-03-18", "end_time": "2014-12-01" },
2: { "start_time": "2013-02-17", "end_time": "2013-08-05" },
},
},
},
},
"test": {
"datasets": {
"UK-DALE": {
"path": "datasets/UK-DALE/ukdale2017.h5",
"buildings": {
5: { "start_time": "2014-06-29", "end_time": "2014-11-13" },
},
},
},
"metrics": [ "f1_score", "mae", ],
},
}
if __name__ == "__main__":
res_test_GPU = nilmtk.api.API(test_GPU)
Traceback:
2020-09-08 10:14:23.070885: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Using TensorFlow backend.
Started training for Seq2Point
Chunk wise training for Seq2Point
Loading data for UK-DALE dataset
Loading building ... 1
ElecMeter(instance=54, building=1, dataset='UK-DALE', site_meter, appliances=[Appliance(type='immersion heater', instance=1), Appliance(type='water pump', instance=1), Appliance(type='security alarm', instance=1), Appliance(type='fan', instance=2), Appliance(type='drill', instance=1), Appliance(type='laptop computer', instance=2)])
Starting enumeration..........
Dropping missing values
{'kettle': {'mean': 15.483873, 'std': 182.19482}}
...............Seq2Point partial_fit running...............
First model training for kettle
2020-09-08 10:15:06.392509: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-08 10:15:08.067095: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.067391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.56GHz coreCount: 16 deviceMemorySize: 3.82GiB deviceMemoryBandwidth: 119.24GiB/s
2020-09-08 10:15:08.067461: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-09-08 10:15:08.068738: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-08 10:15:08.069875: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-08 10:15:08.070148: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-08 10:15:08.071229: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-08 10:15:08.071778: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-08 10:15:08.073978: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-09-08 10:15:08.074065: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.074317: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.074506: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-09-08 10:15:08.074685: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-09-08 10:15:08.092661: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2601325000 Hz
2020-09-08 10:15:08.092880: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d467d64150 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-08 10:15:08.092893: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-09-08 10:15:08.162948: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.163251: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d467df0170 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-08 10:15:08.163265: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1650, Compute Capability 7.5
2020-09-08 10:15:08.163449: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.163697: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.56GHz coreCount: 16 deviceMemorySize: 3.82GiB deviceMemoryBandwidth: 119.24GiB/s
2020-09-08 10:15:08.163733: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-09-08 10:15:08.163774: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-08 10:15:08.163790: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-08 10:15:08.163821: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-08 10:15:08.163835: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-08 10:15:08.163870: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-08 10:15:08.163886: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-09-08 10:15:08.164028: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.164291: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.164518: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-09-08 10:15:08.164545: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-09-08 10:15:08.529420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-08 10:15:08.529470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-09-08 10:15:08.529476: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-09-08 10:15:08.529708: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.530058: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-08 10:15:08.530333: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3401 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:01:00.0, compute capability: 7.5)
Traceback (most recent call last):
File "2020-08-rectangular_regression/benchmark.py", line 111, in <module>
res_test_GPU = nilmtk.api.API(test_GPU)
File ".venv/lib/python3.8/site-packages/nilmtk-0.4.0.dev1+git.236b169-py3.8.egg/nilmtk/api.py", line 45, in __init__
self.experiment()
File ".venv/lib/python3.8/site-packages/nilmtk-0.4.0.dev1+git.236b169-py3.8.egg/nilmtk/api.py", line 80, in experiment
self.train_chunk_wise(clf,d)
File ".venv/lib/python3.8/site-packages/nilmtk-0.4.0.dev1+git.236b169-py3.8.egg/nilmtk/api.py", line 152, in train_chunk_wise
clf.partial_fit(self.train_mains,self.train_submeters)
File ".venv/lib/python3.8/site-packages/nilmtk_contrib-0.1.2.dev1+git.de38dab-py3.8.egg/nilmtk_contrib/disaggregate/seq2point.py", line 88, in partial_fit
File ".venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File ".venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1067, in fit
callbacks = callbacks_module.CallbackList(
File ".venv/lib/python3.8/site-packages/tensorflow/python/keras/callbacks.py", line 234, in __init__
self._should_call_train_batch_hooks = any(
File ".venv/lib/python3.8/site-packages/tensorflow/python/keras/callbacks.py", line 235, in <genexpr>
cb._implements_train_batch_hooks() for cb in self.callbacks)
AttributeError: 'ModelCheckpoint' object has no attribute '_implements_train_batch_hooks'
Closing remaining open files:datasets/UK-DALE/ukdale2017.h5...done
Each call of partial_fit for AFHMM and AFHMM+SAC overrides the results of the previous calls. Thus, chunk-wise training of those algorithms does not work as intended: the current implementation trains only on the last chunk.
The model parameters for AFHMM and AFHMM+SAC are self.means_vector, self.pi_s_vector and self.transmat_vector. However, these attributes are reassigned on every call of partial_fit without considering their previous values.
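A toy demonstration of the accumulation I would expect instead (my own illustration; the arrays stand in for per-chunk estimates of, say, self.means_vector):

import numpy as np

# Running (incremental) mean over chunk-level estimates, instead of keeping
# only the estimate from the last chunk.
chunk_estimates = [np.array([100., 5.]), np.array([120., 7.]), np.array([80., 6.])]
running = chunk_estimates[0]
for k, est in enumerate(chunk_estimates[1:], start=1):
    running = (k * running + est) / (k + 1)
print(running)  # [100. 6.], the mean over all three chunks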
I cannot install the nilmtk-contrib package. The installed libraries are listed below. What could be the problem?
System - win10 64-bit
conda 4.11.0
nilmtk 0.4.3 py_0 nilmtk
scikit-learn 1.0.2 py38hb60ee80_0 conda-forge
keras 2.4.3 pyhd8ed1ab_0 conda-forge
keras-applications 1.0.8 py_1 conda-forge
keras-preprocessing 1.1.2 pyhd8ed1ab_0 conda-forge
cvxpy 1.1.18 py38haa244fe_0 conda-forge
cvxpy-base 1.1.18 py38h5d928e2_0 conda-forge
(nilmtk-env) C:\Users\710_004733>conda install -c conda-forge -c nilmtk nilmtk-contrib
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment:
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
(nilmtk-env) C:\Users\710_004733>
Hello,
I'm a new user of nilmtk, and I'm trying to use the seq2point neural network. I tried to train the seq2point model across homes with data from building 1 and building 2 of the UK-DALE dataset, but the loss never improves from inf. I tried the REDD dataset too, and I always get the same problem.
I'm not an expert on neural networks; would it be possible to let me know how to solve this problem?
experiment = {
'power': {'mains': ['apparent','active'],'appliance': ['apparent','active']},
'sample_rate': 1800,
'appliances': ['fridge', 'dish washer','kettle','washer dryer'],
'DROP_ALL_NANS': True,
'methods': {"Seq2Point":Seq2Point({'n_epochs':50,'batch_size':1024})},
'train': {
'datasets': {
'UKDALE': {
'path': 'C:\\Users\\HaichengLing\\Jupyter\\ukdale.h5',
'buildings': {
1: {
'start_time': '2014-08-05',
'end_time': '2014-12-30'
},
2: {
'start_time': '2013-05-25',
'end_time': '2013-07-15'
},
}
},
}
},
'test': {
'datasets': {
'UKDALE': {
'path': 'C:\\Users\\HaichengLing\\Jupyter\\ukdale.h5',
'buildings': {
1: {
'start_time': '2014-12-30',
'end_time': '2015-01-03'
}
}
}
},
'metrics':['mae', 'rmse']
}
}
Train on 11998 samples, validate on 2118 samples
Epoch 1/50
11998/11998 [==============================] - 5s 389us/step - loss: nan - val_loss: nan
Epoch 00001: val_loss did not improve from inf
Epoch 2/50
11998/11998 [==============================] - 6s 464us/step - loss: nan - val_loss: nan
Epoch 00002: val_loss did not improve from inf
Epoch 3/50
11998/11998 [==============================] - 6s 521us/step - loss: nan - val_loss: nan
Epoch 00003: val_loss did not improve from inf
Epoch 4/50
11998/11998 [==============================] - 6s 524us/step - loss: nan - val_loss: nan
Epoch 00004: val_loss did not improve from inf
Epoch 5/50
11998/11998 [==============================] - 6s 474us/step - loss: nan - val_loss: nan
Epoch 00005: val_loss did not improve from inf
Epoch 6/50
11998/11998 [==============================] - 6s 485us/step - loss: nan - val_loss: nan
Epoch 00006: val_loss did not improve from inf
Epoch 7/50
11998/11998 [==============================] - 6s 490us/step - loss: nan - val_loss: nan
Epoch 00007: val_loss did not improve from inf
Epoch 8/50
11998/11998 [==============================] - 6s 486us/step - loss: nan - val_loss: nan
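For what it's worth, one quick check I would run (my own snippet; train_x and train_y stand for the preprocessed arrays passed to model.fit) is whether the inputs already contain NaN or inf:

import numpy as np

# A NaN loss from the very first batch usually means NaN/inf in the inputs.
assert np.isfinite(train_x).all(), "mains windows contain NaN/inf"
assert np.isfinite(train_y).all(), "appliance targets contain NaN/inf"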
nilmtk-contrib/disaggregate/mean.py, line 13 in 0bc43ef
Also,
import pickle
from nilmtk.datastore import HDFDataStore
from nilmtk.utils import find_nearest

self.state_combinations = None
self.MIN_CHUNK_LENGTH = 100
num_on_states = None
if len(train_appliances) > 12:
    max_num_clusters = 2
else:
    max_num_clusters = 3
appliance_in_model = [d['training_metadata'] for d in self.model]

def _set_state_combinations_if_necessary(self):
    """Get centroids"""
    # If we import sklearn at the top of the file then auto doc fails.
    if (self.state_combinations is None or
            self.state_combinations.shape[1] != len(self.model)):
        from sklearn.utils.extmath import cartesian
        centroids = [model['states'] for model in self.model]
        self.state_combinations = cartesian(centroids)

def disaggregate_chunk(self, mains):
    if len(mains) < self.MIN_CHUNK_LENGTH:
        raise RuntimeError("Chunk is too short.")
    import warnings
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    self._set_state_combinations_if_necessary()
    state_combinations = self.state_combinations
    summed_power_of_each_combination = np.sum(state_combinations, axis=1)
    # Start disaggregation
    indices_of_state_combinations, residual_power = find_nearest(
        summed_power_of_each_combination, mains.values)
    appliance_powers_dict = {}
    for i, model in enumerate(self.model):
        print("Estimating power demand for '{}'"
              .format(model['training_metadata']))
        predicted_power = state_combinations[
            indices_of_state_combinations, i].flatten()
        column = pd.Series(predicted_power, index=mains.index, name=i)
        appliance_powers_dict[self.model[i]['training_metadata']] = column
    appliance_powers = pd.DataFrame(appliance_powers_dict, dtype='float32')
    return appliance_powers
In the AFHMM_SAC class __init__ function, the parameter chunk_wise_training is hardcoded.
Consequence: AFHMM+SAC will be trained jointly, no matter what the user specified. Handy, when the training data does not fit in RAM :).
This issue is just for referencing; I will open a PR.
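The one-line sketch of the fix I have in mind (read the flag from the params dict instead of hardcoding it):

# in AFHMM_SAC.__init__, instead of the hardcoded assignment:
self.chunk_wise_training = params.get('chunk_wise_training', False)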
Hi,
I am wondering: is it possible to train on several different datasets and then test using the API? I've played around but keep getting errors, so I'm unsure whether this is possible using just the API.
I've provided an example below of some of the code I've tried running.
from nilmtk.disaggregate import Mean, CO, Hart85, FHMMExact
from nilmtk_contrib.disaggregate import RNN, Seq2Point, Seq2Seq, DAE, WindowGRU

d = {
'power': {
'mains': ['apparent','active'],
'appliance': ['apparent','active']
},
'sample_rate': 60,
'appliances': ['washing machine','fridge'],
'methods': {
    "Mean": Mean({}), "CO": CO({}), 'Hart85': Hart85({}), "FHMMExact": FHMMExact({}),
    "RNN": RNN({'n_epochs': 50, 'batch_size': 1024}),
    "Seq2Point": Seq2Point({'n_epochs': 50, 'batch_size': 1024}),
    "Seq2Seq": Seq2Seq({'n_epochs': 50, 'batch_size': 1024}),
    "DAE": DAE({'n_epochs': 50, 'batch_size': 1024}),
    "WindowGRU": WindowGRU({'n_epochs': 30, 'batch_size': 1024}),
},
'train': {
'datasets': {
'UKDALE': {
'path': ukdale,
'buildings': {
1: {
'start_time': '2017-03-01',
'end_time': '2017-03-05'
},
}
},
'REDD': {
'path': redd,
'buildings': {
1: {
'start_time': '2011-04-17',
'end_time': '2011-04-27'
}
}
}
}
},
'test': {
'datasets': {
'DRED': {
'path': dred,
'buildings': {
1: {
'start_time': '2015-04-18',
'end_time': '2015-04-19'
}
}
},
'REDD': {
'path': redd,
'buildings': {
1: {
'start_time': '2011-04-17',
'end_time': '2011-04-27'
}
}
}
},
'metrics': ['mae']
}
}
I've attempted the code above as an example and received an error:
IndexError Traceback (most recent call last)
<ipython-input-19-5310df2ef50c> in <module>
----> 1 API(d)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in __init__(self, params)
44 self.DROP_ALL_NANS = params.get("DROP_ALL_NANS", True)
45 self.site_only = params.get('site_only',False)
---> 46 self.experiment()
47
48
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in experiment(self)
103 else:
104 print ("Joint Testing for all algorithms")
--> 105 self.test_jointly(d)
106
107 def train_chunk_wise(self, clf, d, current_epoch):
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in test_jointly(self, d)
272 self.test_mains = [test_mains]
273 self.storing_key = str(dataset) + "_" + str(building)
--> 274 self.call_predict(self.classifiers, test.metadata["timezone"])
275
276
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in call_predict(self, classifiers, timezone)
321 gt_overall={}
322 for name,clf in classifiers:
--> 323 gt_overall,pred_overall[name]=self.predict(clf,self.test_mains,self.test_submeters, self.sample_period, timezone)
324
325 self.gt_overall=gt_overall
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in predict(self, clf, test_elec, test_submeters, sample_period, timezone)
369
370
--> 371 pred_list = clf.disaggregate_chunk(test_elec)
372
373 # It might not have time stamps sometimes due to neural nets
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/disaggregate/hart_85.py in disaggregate_chunk(self, test_mains)
408 [_, transients] = find_steady_states(
409 test_mains[0], state_threshold=self.state_threshold,
--> 410 noise_level=self.noise_level)
411 #print('Transients:',transients)
412 # For now ignoring the first transient
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/nilmtk/feature_detectors/steady_states.py in find_steady_states(dataframe, min_n_samples, state_threshold, noise_level)
69 steady_states = [] # steadyStates to store in returned Dataframe
70 N = 0 # N stores the number of samples in state
---> 71 time = dataframe.iloc[0].name # first state starts at beginning
72
73 # Iterate over the rows performing algorithm
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/pandas/core/indexing.py in __getitem__(self, key)
1422
1423 maybe_callable = com.apply_if_callable(key, self.obj)
-> 1424 return self._getitem_axis(maybe_callable, axis=axis)
1425
1426 def _is_scalar_access(self, key: Tuple):
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis)
2155
2156 # validate the location
-> 2157 self._validate_integer(key, axis)
2158
2159 return self._get_loc(key, axis=axis)
/usr/local/miniconda/envs/nilm/lib/python3.7/site-packages/pandas/core/indexing.py in _validate_integer(self, key, axis)
2086 len_axis = len(self.obj._get_axis(axis))
2087 if key >= len_axis or key < -len_axis:
-> 2088 raise IndexError("single positional indexer is out-of-bounds")
2089
2090 def _getitem_tuple(self, tup):
IndexError: single positional indexer is out-of-bounds
RuntimeError Traceback (most recent call last)
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
339 pass
340 else:
--> 341 return printer(obj)
342 # Finally look for special method names
343 method = get_real_method(obj, self.print_method)
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\IPython\core\pylabtools.py in <lambda>(fig)
246
247 if 'png' in formats:
--> 248 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
249 if 'retina' in formats or 'png2x' in formats:
250 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs))
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\IPython\core\pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs)
130 FigureCanvasBase(fig)
131
--> 132 fig.canvas.print_figure(bytes_io, **kw)
133 data = bytes_io.getvalue()
134 if fmt == 'svg':
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, **kwargs)
2063 orientation=orientation,
2064 dryrun=True,
-> 2065 **kwargs)
2066 renderer = self.figure._cachedRenderer
2067 bbox_artists = kwargs.pop("bbox_extra_artists", None)
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\backends\backend_agg.py in print_png(self, filename_or_obj, metadata, pil_kwargs, *args, **kwargs)
525
526 else:
--> 527 FigureCanvasAgg.draw(self)
528 renderer = self.get_renderer()
529 with cbook._setattr_cm(renderer, dpi=self.figure.dpi), \
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\backends\backend_agg.py in draw(self)
386 self.renderer = self.get_renderer(cleared=True)
387 with RendererAgg.lock:
--> 388 self.figure.draw(self.renderer)
389 # A GUI class may be need to update a window using this draw, so
390 # don't forget to call the superclass.
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\figure.py in draw(self, renderer)
1707 self.patch.draw(renderer)
1708 mimage._draw_list_compositing_images(
-> 1709 renderer, self, artists, self.suppressComposite)
1710
1711 renderer.close_group('figure')
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
133 if not_composite or not has_images:
134 for a in artists:
--> 135 a.draw(renderer)
136 else:
137 # Composite any adjacent images together
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\axes_base.py in draw(self, renderer, inframe)
2605 artists.remove(spine)
2606
-> 2607 self._update_title_position(renderer)
2608
2609 if not self.axison or inframe:
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\axes_base.py in _update_title_position(self, renderer)
2554 # this happens for an empty bb
2555 y = 1
-> 2556 if title.get_window_extent(renderer).ymin < top:
2557 y = self.transAxes.inverted().transform(
2558 (0., top))[1]
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\text.py in get_window_extent(self, renderer, dpi)
888 raise RuntimeError('Cannot get window extent w/o renderer')
889
--> 890 bbox, info, descent = self._get_layout(self._renderer)
891 x, y = self.get_unitless_position()
892 x, y = self.get_transform().transform_point((x, y))
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\text.py in _get_layout(self, renderer)
289 _, lp_h, lp_d = renderer.get_text_width_height_descent(
290 "lp", self._fontproperties,
--> 291 ismath="TeX" if self.get_usetex() else False)
292 min_dy = (lp_h - lp_d) * self._linespacing
293
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\backends\backend_agg.py in get_text_width_height_descent(self, s, prop, ismath)
208
209 flags = get_hinting_flag()
--> 210 font = self._get_agg_font(prop)
211 font.set_text(s, 0.0, flags=flags)
212 w, h = font.get_width_height() # width and height of unrotated string
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\backends\backend_agg.py in _get_agg_font(self, prop)
244 """
245 fname = findfont(prop)
--> 246 font = get_font(fname)
247
248 font.clear()
C:\ProgramData\Anaconda3\envs\t1\lib\site-packages\matplotlib\font_manager.py in get_font(filename, hinting_factor)
1339 if hinting_factor is None:
1340 hinting_factor = rcParams['text.hinting_factor']
-> 1341 return _get_font(filename, hinting_factor)
1342
1343
RuntimeError: In FT2Font: Can not load face.
Is it possible to save and subsequently load pretrained neural-network models, as is the case for the mean algorithm?
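Something along these lines is what I'm after; since the deep models keep one Keras model per appliance in clf.models (an OrderedDict), I assume plain Keras save/load calls would work (untested sketch; clf stands for a trained disaggregator):

from tensorflow import keras

# After training: persist each per-appliance network.
for app_name, keras_model in clf.models.items():
    keras_model.save("seq2point-{}.h5".format(app_name))

# Later: restore a model before calling disaggregate_chunk.
clf.models["fridge"] = keras.models.load_model("seq2point-fridge.h5")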
Hello everyone,
Thanks for your contribution!
I can use DAE, Seq2Point, Seq2Seq, RNN and WindowGRU, but when I change them to AFHMM and AFHMM_SAC I get the error:
Started training for AFHMM
Joint training for AFHMM
............... Loading Data for training ...................
Loading data for Dataport dataset
Loading building ... 1
Done loading data all meters for this chunk.
Dropping missing values
Train Jointly
(2318, 1) (2318, 1) MultiIndex([('power', 'apparent')],
names=['physical_quantity', 'type']) MultiIndex([('power', 'active')],
names=['physical_quantity', 'type'])
Finished Training
Finished training for AFHMM
Started training for AFHMM
Joint training for AFHMM
............... Loading Data for training ...................
Loading data for Dataport dataset
Loading building ... 1
Done loading data all meters for this chunk.
Dropping missing values
Train Jointly
(2318, 1) (2318, 1) MultiIndex([('power', 'apparent')],
names=['physical_quantity', 'type']) MultiIndex([('power', 'active')],
names=['physical_quantity', 'type'])
Finished Training
Finished training for AFHMM
Started training for Mean
Joint training for Mean
............... Loading Data for training ...................
Loading data for Dataport dataset
Loading building ... 1
Done loading data all meters for this chunk.
Dropping missing values
Train Jointly
(2318, 1) (2318, 1) MultiIndex([('power', 'apparent')],
names=['physical_quantity', 'type']) MultiIndex([('power', 'active')],
names=['physical_quantity', 'type'])
Finished training for Mean
Joint Testing for all algorithms
Loading data for Datport dataset
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
Dropping missing values
Generating predictions for : AFHMM
Traceback (most recent call last):
File "", line 54, in
api_res = API(redd)
File "F:\Anaconda3\envs\nilmtk\lib\site-packages\nilmtk\api.py", line 59, in init
self.experiment(params)
File "F:\Anaconda3\envs\nilmtk\lib\site-packages\nilmtk\api.py", line 118, in experiment
self.test_jointly(d)
File "F:\Anaconda3\envs\nilmtk\lib\site-packages\nilmtk\api.py", line 292, in test_jointly
self.call_predict(self.classifiers)
File "F:\Anaconda3\envs\nilmtk\lib\site-packages\nilmtk\api.py", line 339, in call_predict
gt_overall,pred_overall[name]=self.predict(clf,self.test_mains,self.test_submeters, self.sample_period,'Europe/London')
File "F:\Anaconda3\envs\nilmtk\lib\site-packages\nilmtk\api.py", line 386, in predict
pred_list = clf.disaggregate_chunk(test_elec)
File "F:\Anaconda3\envs\nilmtk\lib\site-packages\nilmtk_contrib\disaggregate\afhmm.py", line 226, in disaggregate_chunk
self.arr_of_results.append(d[i])
File "", line 2, in getitem
File "F:\Anaconda3\envs\nilmtk\lib\multiprocessing\managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
KeyError: 0
Could you help me?
Hi,
not an issue, just a question. I noticed that the preset sequence length for RNN is 19, whereas it's 99 for all the others. Is that on purpose? If so, why?
class RNN(Disaggregator):
    def __init__(self, params):
        """
        Parameters to be specified for the model
        """
        self.MODEL_NAME = "RNN"
        self.models = OrderedDict()
        self.chunk_wise_training = params.get('chunk_wise_training', False)
        self.sequence_length = params.get('sequence_length', 19)
RNN works fine for me, I'm just curious why 19 is set as default.
best,
C
I have converted the REFIT dataset with the convert_refit function, resulting in a refit.h5 file.
Now I have a problem when setting up an experiment with the refit.h5 file.
Error code:
Traceback (most recent call last):
File "C:/Users/mime02/PycharmProjects/EnergyPredictionLSTM/Evaluation/NILMTK/NILMTK_con.py", line 168, in
api_res = API(refit)
File "C:\Users\mime02\Anaconda3\envs\nilm\lib\site-packages\nilmtk\api.py", line 46, in init
self.experiment()
File "C:\Users\mime02\Anaconda3\envs\nilm\lib\site-packages\nilmtk\api.py", line 105, in experiment
self.test_jointly(d)
File "C:\Users\mime02\Anaconda3\envs\nilm\lib\site-packages\nilmtk\api.py", line 250, in test_jointly
test_mains=next(test.buildings[building].elec.mains().load(physical_quantity='power', ac_type='apparent', sample_period=self.sample_period))
File "C:\Users\mime02\Anaconda3\envs\nilm\lib\site-packages\nilmtk\elecmeter.py", line 451, in load
last_node = self.get_source_node(**kwargs)
File "C:\Users\mime02\Anaconda3\envs\nilm\lib\site-packages\nilmtk\elecmeter.py", line 576, in get_source_node
loader_kwargs = self._convert_physical_quantity_and_ac_type_to_cols(**loader_kwargs)
File "C:\Users\mime02\Anaconda3\envs\nilm\lib\site-packages\nilmtk\elecmeter.py", line 560, in _convert_physical_quantity_and_ac_type_to_cols
raise MeasurementError(msg)
nilmtk.exceptions.MeasurementError: AC type 'apparent' not available. Available columns = [('power', 'active')].
Closing remaining open files:C:\Users\refit.h5
How can I fix this issue?
Hope you can help
My code is:
refit = {
'power': {
'mains': ['apparent', 'active'],
'appliance': ['apparent', 'active']
},
'sample_rate': 100,
'appliances': ['fridge'],
'methods': {
"CombinatorialOptimisation": CO({}),
"FHMM_EXACT": FHMMExact({'num_of_states': 2}),
'WindowGRU': WindowGRU({'n_epochs': 1, 'batch_size': 32}),
'RNN': RNN({'n_epochs': 1, 'batch_size': 32}),
'DAE': DAE({'n_epochs': 1, 'batch_size': 32}),
'Seq2Point': Seq2Point({'n_epochs': 1, 'batch_size': 32}),
'Seq2Seq': Seq2Seq({'n_epochs': 1, 'batch_size': 32}),
},
'train': {
'datasets': {
'Dataport': {
'path': r'C:\Users\refit.h5',
'buildings': {
2: {
'start_time': '2013-10-10',
'end_time': '2013-10-20'
},
}
}
}
},
'test': {
'datasets': {
'Dataport': {
'path': r'C:\Users\refit.h5',
'buildings': {
2: {
'start_time': '2013-11-01',
'end_time': '2013-11-11'
},
}
}
},
'metrics': ['mae', 'rmse']
}
}
api_res = API(refit)
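From the error message I guess the converted refit.h5 only stores ('power', 'active'), so presumably the power spec has to request active power only (untested guess):

'power': {
    'mains': ['active'],      # REFIT apparently provides no apparent power
    'appliance': ['active']
},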
Hi,
I installed nilmtk-contrib on both Windows 10 and Ubuntu 16.04 (running in a VirtualBox VM on Windows 10) with:
conda create -n nilm -c conda-forge -c nilmtk nilmtk-contrib
Then I ran the example notebook "Using the API with NILMTK-CONTRIB":
from nilmtk.disaggregate import Mean
from nilmtk_contrib.disaggregate import DAE, Seq2Point, Seq2Seq, RNN, WindowGRU
redd = {
'power': {
'mains': ['apparent','active'],
'appliance': ['apparent','active']
},
'sample_rate': 60,
'appliances': ['fridge'],
'methods': {
'WindowGRU':WindowGRU({'n_epochs':50,'batch_size':32}),
'RNN':RNN({'n_epochs':50,'batch_size':32}),
'DAE':DAE({'n_epochs':50,'batch_size':32}),
'Seq2Point':Seq2Point({'n_epochs':50,'batch_size':32}),
'Seq2Seq':Seq2Seq({'n_epochs':50,'batch_size':32}),
'Mean': Mean({}),
},
'train': {
'datasets': {
'Dataport': {
'path': '/home/ubuntu/Desktop/nilmtk-contrib/redd.hdf5',
'buildings': {
6: {
'start_time': '2015-04-04',
'end_time': '2015-04-05'
},
# 56: {
# 'start_time': '2015-01-28',
# 'end_time': '2015-01-30'
# },
}
}
}
},
'test': {
'datasets': {
'Datport': {
'path': '/home/ubuntu/Desktop/nilmtk-contrib/redd.hdf5',
'buildings': {
6: {
'start_time': '2015-04-05',
'end_time': '2015-04-06'
},
}
}
},
'metrics':['mae']
}
}
I got the following error on both the Windows 10 and Ubuntu systems:
api_res = API(redd)
Started training for WindowGRU
Joint training for WindowGRU
............... Loading Data for training ...................
Loading data for Dataport dataset
Loading building ... 6
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-10-6b963021b56a> in <module>
----> 1 api_res = API(redd)
~/anaconda3/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in __init__(self, params)
44 self.DROP_ALL_NANS = params.get("DROP_ALL_NANS", True)
45 self.site_only = params.get('site_only',False)
---> 46 self.experiment()
47
48
~/anaconda3/envs/nilm/lib/python3.7/site-packages/nilmtk/api.py in experiment(self)
89 else:
90 print ("Joint training for ",clf.MODEL_NAME)
---> 91 self.train_jointly(clf,d)
92
93 print ("Finished training for ",clf.MODEL_NAME)
...
~/anaconda3/envs/nilm/lib/python3.7/site-packages/pandas/core/resample.py in _adjust_dates_anchored(first, last, offset, closed, base)
1763 first_tzinfo = first.tzinfo
1764 last_tzinfo = last.tzinfo
-> 1765 start_day_nanos = first.normalize().value
1766 if first_tzinfo is not None:
1767 first = first.tz_convert("UTC")
AttributeError: 'NaTType' object has no attribute 'normalize'
Any idea what the error is?
I generated the data set "redd.hdf5" with the following code:
import warnings
warnings.filterwarnings('ignore')
from nilmtk.dataset_converters.redd import convert_redd
filename = "{path}/redd.hdf5"
convert_redd.convert_redd("{path}/REDD/low_freq", filename)
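A plausible cause of the NaTType error: REDD was recorded in spring 2011, so a training window in 2015 selects an empty range and the resampler receives NaT timestamps. A quick way to check the available range, as a sketch using nilmtk's DataSet (reusing the "{path}" placeholder from the conversion code above):

from nilmtk import DataSet

ds = DataSet("{path}/redd.hdf5")
# Print the timeframe covered by building 6's mains; it should fall in 2011.
print(ds.buildings[6].elec.mains().get_timeframe())
ds.store.close()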
They don't run as-is, and there are some other problems reported in other issues. We should also remove the PS (Pecan Street) Dataport dataset, or reduce its use in the examples, due to the issues reported in nilmtk/nilmtk#873.
Hello,
I need your help to solve a specific issue. I am trying to run the notebook; here is the error shown while running the code.
Why clone the repository into Lib/site-packages in your environment?
Why not just add a setup.py?
> [34/35] RUN conda create -n nilmtk-env2 -c conda-forge -c nilmtk nilmtk-contrib:
#39 1.179 Collecting package metadata (current_repodata.json): ...working... done
#39 3.623 Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source.
#39 4.509 Collecting package metadata (repodata.json): ...working... done
#39 21.63 Found conflicts! Looking for incompatible packages.
#39 21.63 This can take several minutes. Press CTRL-C to abort.
#39 21.63 failed
#39 21.63
#39 21.63 UnsatisfiableError: The following specifications were found
#39 21.63 to be incompatible with the existing python installation in your environment:
#39 21.63
#39 21.63 Specifications:
#39 21.63
#39 21.63 - cvxpy[version='>=1.0.0'] -> python[version='3.10.*|>=3.10,<3.11.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0|3.9.*|3.8.*|3.7.*|3.6.*',build='*_73_pypy|*_cpython']
#39 21.63 - keras[version='>=2.2.4'] -> python[version='>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.7,<3.8.0a0|>=3.7,<3.8.0a0|>=3.9,<3.10.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0|>=3.8,<3.9.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0',build='*_73_pypy|*_73_pypy|*_cpython|*_73_pypy']
#39 21.63 - nilmtk[version='>=0.4'] -> python[version='>=2.7,<2.8.0a0|>=3.5|>=3.6,<3.7.0a0|>=3.8,<3.9.0a0|>=3.7,<3.8.0a0|>=3.8.0a,<3.9.0a0|>=3.9,<3.10.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.10,<3.11.0a0|>=3.10,<3.11.0a0',build='*_73_pypy|*_cpython|*_cpython|*_cpython|*_cpython']
#39 21.63
#39 21.63 Your python: python[version='>=3.6']
#39 21.63
#39 21.63 If python is on the left-most side of the chain, that's the version you've asked for.
#39 21.63 When python appears to the right, that indicates that the thing on the left is somehow
#39 21.63 not available for the python version you are constrained to. Note that conda will not
#39 21.63 change your python version to a different minor version unless you explicitly specify
#39 21.63 that.
#39 21.63
#39 21.63 The following specifications were found to be incompatible with each other:
#39 21.63
#39 21.63 Output in format: Requested package -> Available versions
#39 21.63
#39 21.63 Package pypy3.9 conflicts for:
#39 21.63 python[version='>=3.6'] -> pypy3.9=7.3.8
#39 21.63 python[version='>=3.6'] -> python_abi==3.9[build=*_pypy39_pp73] -> pypy3.9=7.3
#39 21.63
#39 21.63 Package libuuid conflicts for:
#39 21.63 cvxpy[version='>=1.0.0'] -> python[version='>=3.10,<3.11.0a0'] -> libuuid[version='>=2.32.1,<3.0a0']
#39 21.63 nilmtk[version='>=0.4'] -> python[version='>=3.6'] -> libuuid[version='>=2.32.1,<3.0a0']
#39 21.63 python[version='>=3.6'] -> libuuid[version='>=2.32.1,<3.0a0']
#39 21.63 keras[version='>=2.2.4'] -> python[version='>=3.6'] -> libuuid[version='>=2.32.1,<3.0a0']
#39 21.63
#39 21.63 Package setuptools conflicts for:
#39 21.63 nilmtk[version='>=0.4'] -> matplotlib-base[version='>=3.1.0,<3.2.0'] -> setuptools[version='<60.0.0']
#39 21.63 python[version='>=3.6'] -> pip -> setuptools
#39 21.63
#39 21.63 Package liblapack conflicts for:
#39 21.63 nilmtk[version='>=0.4'] -> numpy[version='>=1.13.3,<1.20'] -> liblapack[version='>=3.8.0,<4.0.0a0|>=3.8.0,<4.0a0']
#39 21.63 keras[version='>=2.2.4'] -> numpy[version='>=1.9.1'] -> liblapack[version='>=3.8.0,<4.0.0a0|>=3.8.0,<4.0a0']
#39 21.63 cvxpy[version='>=1.0.0'] -> scs[version='>=1.1.6'] -> liblapack[version='>=3.8.0,<4.0.0a0|>=3.8.0,<4.0a0']
#39 21.63
#39 21.63 Package expat conflicts for:
#39 21.63 cvxpy[version='>=1.0.0'] -> pypy3.7[version='>=7.3.7'] -> expat[version='>=2.2.9,<3.0.0a0|>=2.3.0,<3.0a0|>=2.4.1,<3.0a0']
#39 21.63 python[version='>=3.6'] -> pypy3.9=7.3.8 -> expat[version='>=2.2.9,<3.0.0a0|>=2.3.0,<3.0a0|>=2.4.1,<3.0a0|>=2.4.7,<3.0a0']
#39 21.63
#39 21.63 Package libgfortran5 conflicts for:
#39 21.63 nilmtk[version='>=0.4'] -> scipy[version='>=1.0.0'] -> libgfortran5[version='>=9.3.0|>=9.4.0']
#39 21.63 keras[version='>=2.2.4'] -> scipy[version='>=0.14'] -> libgfortran5[version='>=9.3.0|>=9.4.0']
#39 21.63
#39 21.63 Package pypy3.8 conflicts for:
#39 21.63 python[version='>=3.6'] -> python_abi==3.8[build=*_pypy38_pp73] -> pypy3.8=7.3
#39 21.63 python[version='>=3.6'] -> pypy3.8=7.3.8
#39 21.63
#39 21.63 Package pypy3.6 conflicts for:
#39 21.63 python[version='>=3.6'] -> python_abi==3.6[build=*_pypy36_pp73] -> pypy3.6=7.3
#39 21.63 python[version='>=3.6'] -> pypy3.6[version='7.3.0.*|7.3.1.*|7.3.2.*|7.3.3.*']
#39 21.63
#39 22.63 ERROR conda.cli.main_run:execute(41): `conda run /bin/bash -c conda create -n nilmtk-env2 -c conda-forge -c nilmtk nilmtk-contrib` failed. (See above for error)
------
executor failed running [conda run --no-capture-output -n nilmtk-env /bin/bash -c conda create -n nilmtk-env2 -c conda-forge -c nilmtk nilmtk-contrib]: exit code: 1
For an optimization problem, one expects the optimization procedure to complete as quickly as possible. But the DSC algorithm does not converge before the maximum number of iterations is reached. Why?
When inferring with AFHMM+SAC, one disaggregation thread can fail. This happens to the disaggregation thread that deals with the incomplete block, i.e. with the tail of the chunk. The failure happens during the optimization: the solver does not find any suitable constraint for the appliance and returns a Variable with a None value. This None, instead of an np.array, causes the subsequent alternating minimization to fail.
Generating predictions for : AFHMM_SAC
Process Process-10:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File ".venv/lib/python3.8/site-packages/nilmtk_contrib-0.1.2.dev1+git.de38dab-py3.8.egg/nilmtk_contrib/disaggregate/afhmm_sac.py", line 172, in disaggregate_thread
app_usage= np.sum(s_[appliance_id]@means_vector[appliance_id],axis=1)
ValueError: matmul: Input operand 0 does not have enough dimensions (has 0, gufunc core with signature (n?,k),(k,m?)->(n?,m?) requires 1
Reproducible with chunk-wise training and testing on REFIT building 1 between 2014-03-08 and 2014-04-11.
import nilmtk.api
import nilmtk_contrib
params = {
"power": { "mains": [ "apparent", "active" ], "appliance": [ "active" ] },
"appliances": [ "dish washer", ],
"artificial_aggregate": False,
"chunk_size": 2**15,
"sample_rate": 60,
"DROP_ALL_NANS": True,
"methods": {
"AFHMM+SAC": nilmtk_contrib.disaggregate.AFHMM_SAC({
"default_num_states": 2,
"chunk_wise_training": True,
"time_period": 720,
}),
},
"train": {
"datasets": {
"REDD": {
"path": "datasets/REDD/redd.h5",
"buildings": {
1: { "start_time": "2011-04-18", "end_time": "2011-04-28" },
}},
},
},
"test": {
"datasets": {
"REFIT": {
"path": "datasets/REFIT/refit.h5",
"buildings": {
1: { "start_time": "2014-03-08", "end_time": "2014-04-11" },
}},
},
"metrics": [ "mae", ],
},
}
api_res = nilmtk.api.API(params)
Solution: test whether the Variable is empty before adding it to the constraints list s_ here. If the variable is empty, append an np.zeros of the right shape instead.
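A minimal sketch of that guard as a hypothetical helper (the names are assumptions, not the exact code in afhmm_sac.py):

import numpy as np

def value_or_zeros(var):
    # cvxpy leaves Variable.value as None when the solver finds no feasible
    # assignment (here: the trailing, incomplete block of the chunk).
    if var.value is None:
        return np.zeros(var.shape)
    return var.value

Each solved state Variable would then be appended to s_ as value_or_zeros(var) instead of being used directly.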
Hi!
I have three problems with NILMTK-contrib:
1. My dataset has no 'apparent' ac_type. When I use
experiment1 = {'power': {'mains': ['active'], 'appliance': ['active']}, 'sample_rate': 60, ...
I get the error:
nilmtk.exceptions.MeasurementError: AC type '['active']' not available. Available columns = [('power', 'apparent')].
(See the sketch after this list for a way to inspect which AC types a file actually provides.)
2. ModuleNotFoundError: No module named 'tensorflow_core.estimator'
3. How do I get the prediction results from the models after training? I need each appliance's predicted data as input for the next experiment.
Also, when I install nilmtk-contrib with the conda command I get version 0.1.0, but the guide notebook uses version 0.1.2. Should I update my nilmtk-contrib to solve the first two problems?
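Regarding problem 1, a way to check which AC types a converted file actually provides (a sketch; the path and building index are hypothetical):

from nilmtk import DataSet

ds = DataSet('path/to/your_dataset.h5')
mains = ds.buildings[1].elec.mains()
print(mains.available_ac_types('power'))  # e.g. ['apparent']
ds.store.close()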
Thanks!
Hi,
I'm trying to correctly understand the implementation of nilmtk-contrib. In the NILMTK API Tutorial.ipynb notebook, cell In [3], the sample rate in experiment1 is 60, and the comment below In [3] says the sample rate has been set at 60 Hz. Since the value is a sampling period in seconds, and the highest resolution of the Dataport dataset is 1 Hz, I think the comment should say the sample rate is set at 1/60 Hz (one sample every 60 seconds).
Please correct me if I have misunderstood.
Hi!
Since the latest update, which switched from sklearn's train_test_split to the built-in Keras method, the RNN implementation doesn't work anymore, presumably because the fit function still takes train_x and train_y as inputs; these should be train_main and power respectively (see the corrected call after the snippet below). Should be easy to fix :)
model.fit(
train_x, train_y,
validation_split=.15,
epochs=self.n_epochs,
batch_size=self.batch_size,
callbacks=[ checkpoint ],
)
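The suggested fix, assuming the surrounding variable names in rnn.py match the issue text:

model.fit(
    train_main, power,   # was: train_x, train_y
    validation_split=.15,
    epochs=self.n_epochs,
    batch_size=self.batch_size,
    callbacks=[ checkpoint ],
)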
Cheers!
How does nilmtk-contrib work with nilmtk, and what are the specific steps under Anaconda? Thank you. I have the same problem as a previous poster: "No module named 'nilmtk.disaggregate'".
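For reference, the two packages expose their algorithms from different modules (these imports appear elsewhere in these issues):

from nilmtk.api import API
from nilmtk.disaggregate import CO, FHMMExact, Mean                               # ships with nilmtk
from nilmtk_contrib.disaggregate import DAE, Seq2Point, Seq2Seq, RNN, WindowGRU   # ships with nilmtk-contrib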
Hello,
I want to read the paper for the disaggregation method, but the paper link you gave in the README is invalid. I hope you can update it. Thank you.
Hi,
I tried to run the following code:
import warnings
warnings.filterwarnings("ignore")
from nilmtk.api import API
from nilmtk.disaggregate import Mean
from nilmtk_contrib.disaggregate import DAE,Seq2Point, Seq2Seq, RNN, WindowGRU
redd = {
'power': {
'mains': ['apparent','active'],
'appliance': ['apparent','active']
},
'sample_rate': 60,
'appliances': ['fridge'],
'methods': {
'WindowGRU':WindowGRU({'n_epochs':50,'batch_size':32}),
'RNN':RNN({'n_epochs':50,'batch_size':32}),
'DAE':DAE({'n_epochs':50,'batch_size':32}),
'Seq2Point':Seq2Point({'n_epochs':50,'batch_size':32}),
'Seq2Seq':Seq2Seq({'n_epochs':50,'batch_size':32}),
'Mean': Mean({}),
},
'train': {
'datasets': {
'REDD': {
'path': 'C:/git/DNN-NILM/data/redd.hdf5',
'buildings': {
1: {
'start_time': '2011-04-17',
'end_time': '2011-04-27'
},
# 56: {
# 'start_time': '2015-01-28',
# 'end_time': '2015-01-30'
# },
}
}
}
},
'test': {
'datasets': {
'REDD': {
'path': 'C:/git/DNN-NILM/data/redd.hdf5',
'buildings': {
1: {
'start_time': '2013-01-05',
'end_time': '2013-01-08'
},
}
}
},
'metrics':['mae']
}
}
Then, I got the error after the training was done:
api_res = API(redd)
Started training for WindowGRU
Joint training for WindowGRU
............... Loading Data for training ...................
Loading data for REDD dataset
Loading building ... 1
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
Dropping missing values
Training processing
First model training for fridge
WARNING:tensorflow:From C:\Users\WuTeng\anaconda3\envs\nilm\lib\site-packages\keras\backend\tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
Train on 10206 samples, validate on 1802 samples
Epoch 1/50
10206/10206 [==============================] - 70s 7ms/step - loss: 0.0104 - val_loss: 0.0071
Epoch 00001: val_loss improved from inf to 0.00709, saving model to windowgru-temp-weights-74894.h5
Epoch 2/50
10206/10206 [==============================] - 75s 7ms/step - loss: 0.0067 - val_loss: 0.0043
...
Epoch 00048: val_loss did not improve from 0.00116
Epoch 49/50
10206/10206 [==============================] - 67s 7ms/step - loss: 0.0020 - val_loss: 0.0018
Epoch 00049: val_loss did not improve from 0.00116
Epoch 50/50
10206/10206 [==============================] - 68s 7ms/step - loss: 0.0017 - val_loss: 0.0012
Epoch 00050: val_loss did not improve from 0.00116
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-15-6b963021b56a> in <module>
----> 1 api_res = API(redd)
~\anaconda3\envs\nilm\lib\site-packages\nilmtk\api.py in __init__(self, params)
44 self.DROP_ALL_NANS = params.get("DROP_ALL_NANS", True)
45 self.site_only = params.get('site_only',False)
---> 46 self.experiment()
47
48
~\anaconda3\envs\nilm\lib\site-packages\nilmtk\api.py in experiment(self)
89 else:
90 print ("Joint training for ",clf.MODEL_NAME)
---> 91 self.train_jointly(clf,d)
92
93 print ("Finished training for ",clf.MODEL_NAME)
~\anaconda3\envs\nilm\lib\site-packages\nilmtk\api.py in train_jointly(self, clf, d)
238 self.train_submeters = appliance_readings
239
--> 240 clf.partial_fit(self.train_mains,self.train_submeters)
241
...
~\anaconda3\envs\nilm\lib\site-packages\keras\engine\network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
1228 else:
1229 saving.load_weights_from_hdf5_group(
-> 1230 f, self.layers, reshape=reshape)
1231 if hasattr(f, 'close'):
1232 f.close()
~\anaconda3\envs\nilm\lib\site-packages\keras\engine\saving.py in load_weights_from_hdf5_group(f, layers, reshape)
1181 """
1182 if 'keras_version' in f.attrs:
-> 1183 original_keras_version = f.attrs['keras_version'].decode('utf8')
1184 else:
1185 original_keras_version = '1'
AttributeError: 'str' object has no attribute 'decode'
Any idea what the issue is? Is my Keras version wrong?
And here is my package list:
absl-py 0.12.0 pyhd8ed1ab_0 conda-forge
anyio 3.0.1 py37h03978a9_0 conda-forge
argon2-cffi 20.1.0 py37hcc03f2d_2 conda-forge
astor 0.8.1 pyh9f0ad1d_0 conda-forge
async_generator 1.10 py_0 conda-forge
attrs 21.2.0 pyhd8ed1ab_0 conda-forge
babel 2.9.1 pyh44b312d_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge
bleach 3.3.0 pyh44b312d_0 conda-forge
blosc 1.21.0 h0e60522_0 conda-forge
brotlipy 0.7.0 py37hcc03f2d_1001 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
ca-certificates 2020.12.5 h5b45459_0 conda-forge
cached-property 1.5.2 hd8ed1ab_1 conda-forge
cached_property 1.5.2 pyha770c72_1 conda-forge
certifi 2020.12.5 py37h03978a9_1 conda-forge
cffi 1.14.5 py37hd8e9650_0 conda-forge
chardet 4.0.0 py37h03978a9_1 conda-forge
colorama 0.4.4 pyh9f0ad1d_0 conda-forge
cryptography 3.4.7 py37h20c650d_0 conda-forge
cvxpy 1.1.12 py37h03978a9_0 conda-forge
cvxpy-base 1.1.12 py37h90003fb_0 conda-forge
cycler 0.10.0 py_2 conda-forge
dataclasses 0.8 pyhc8e2a94_1 conda-forge
decorator 5.0.9 pyhd8ed1ab_0 conda-forge
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
ecos 2.0.8 py37hebb4d16_0 conda-forge
entrypoints 0.3 pyhd8ed1ab_1003 conda-forge
freetype 2.10.4 h546665d_1 conda-forge
gast 0.4.0 pyh9f0ad1d_0 conda-forge
google-pasta 0.2.0 pyh8c360ce_0 conda-forge
grpcio 1.37.1 py37h04d2302_0 conda-forge
h5py 3.2.1 nompi_py37he280515_100 conda-forge
hdf5 1.10.6 nompi_h5268f04_1114 conda-forge
hmmlearn 0.2.5 py37hda49f71_0 conda-forge
idna 2.10 pyh9f0ad1d_0 conda-forge
importlib-metadata 4.0.1 py37h03978a9_0 conda-forge
intel-openmp 2021.2.0 h57928b3_616 conda-forge
ipykernel 5.5.5 py37h7813e69_0 conda-forge
ipython 7.23.1 py37h7813e69_0 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
jedi 0.18.0 py37h03978a9_2 conda-forge
jinja2 3.0.0 pyhd8ed1ab_0 conda-forge
joblib 1.0.1 pyhd8ed1ab_0 conda-forge
json5 0.9.5 pyh9f0ad1d_0 conda-forge
jsonschema 3.2.0 pyhd8ed1ab_3 conda-forge
jupyter_client 6.1.12 pyhd8ed1ab_0 conda-forge
jupyter_core 4.7.1 py37h03978a9_0 conda-forge
jupyter_server 1.7.0 py37h03978a9_1 conda-forge
jupyterlab 3.0.15 pyhd8ed1ab_0 conda-forge
jupyterlab_pygments 0.1.2 pyh9f0ad1d_0 conda-forge
jupyterlab_server 2.5.1 pyhd8ed1ab_0 conda-forge
keras 2.3.1 py37h21ff451_0 conda-forge
keras-applications 1.0.8 py_1 conda-forge
keras-preprocessing 1.1.2 pyhd8ed1ab_0 conda-forge
kiwisolver 1.3.1 py37h8c56517_1 conda-forge
krb5 1.19.1 hbae68bd_0 conda-forge
libblas 3.9.0 9_mkl conda-forge
libcblas 3.9.0 9_mkl conda-forge
libcurl 7.76.1 h789b8ee_2 conda-forge
libgpuarray 0.7.6 h8ffe710_1003 conda-forge
liblapack 3.9.0 9_mkl conda-forge
libpng 1.6.37 h1d00b33_2 conda-forge
libprotobuf 3.17.0 h7755175_0 conda-forge
libsodium 1.0.18 h8d14728_1 conda-forge
libssh2 1.9.0 h680486a_6 conda-forge
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
mako 1.1.4 pyh44b312d_0 conda-forge
markdown 3.3.4 pyhd8ed1ab_0 conda-forge
markupsafe 2.0.0 py37hcc03f2d_0 conda-forge
matplotlib-base 3.1.3 py37h2981e6d_0 conda-forge
matplotlib-inline 0.1.2 pyhd8ed1ab_2 conda-forge
mistune 0.8.4 py37hcc03f2d_1003 conda-forge
mkl 2021.2.0 hb70f87d_389 conda-forge
mock 4.0.3 py37h03978a9_1 conda-forge
msys2-conda-epoch 20160418 1 conda-forge
nbclassic 0.2.8 pyhd8ed1ab_0 conda-forge
nbclient 0.5.3 pyhd8ed1ab_0 conda-forge
nbconvert 6.0.7 py37h03978a9_3 conda-forge
nbformat 5.1.3 pyhd8ed1ab_0 conda-forge
nest-asyncio 1.5.1 pyhd8ed1ab_0 conda-forge
networkx 2.1 py_1 conda-forge
nilm_metadata 0.2.4 0 nilmtk
nilmtk 0.4.3 py_0 nilmtk
nilmtk-contrib 0.1.1 py_0 nilmtk
nose 1.3.7 py_1006 conda-forge
notebook 6.3.0 pyha770c72_1 conda-forge
numexpr 2.7.3 py37h08fd248_0 conda-forge
numpy 1.19.5 py37hd20adf4_1 conda-forge
openssl 1.1.1k h8ffe710_0 conda-forge
osqp 0.6.2 py37h08fd248_1 conda-forge
packaging 20.9 pyh44b312d_0 conda-forge
pandas 0.25.3 py37he350917_0 conda-forge
pandoc 2.13 h8ffe710_0 conda-forge
pandocfilters 1.4.2 py_1 conda-forge
parso 0.8.2 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pip 21.1.1 pyhd8ed1ab_0 conda-forge
prometheus_client 0.10.1 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.18 pyha770c72_0 conda-forge
protobuf 3.17.0 py37hf2a7229_0 conda-forge
pycparser 2.20 pyh9f0ad1d_2 conda-forge
pygments 2.9.0 pyhd8ed1ab_0 conda-forge
pygpu 0.7.6 py37hda49f71_1002 conda-forge
pyopenssl 20.0.1 pyhd8ed1ab_0 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pyreadline 2.1 py37h03978a9_1003 conda-forge
pyrsistent 0.17.3 py37hcc03f2d_2 conda-forge
pysocks 1.7.1 py37h03978a9_3 conda-forge
pytables 3.6.1 py37hdc91d43_3 conda-forge
python 3.7.10 h7840368_100_cpython conda-forge
python-dateutil 2.8.1 py_0 conda-forge
python_abi 3.7 1_cp37m conda-forge
pytz 2021.1 pyhd8ed1ab_0 conda-forge
pywin32 300 py37hcc03f2d_0 conda-forge
pywinpty 1.0.1 py37h7f67f24_0 conda-forge
pyyaml 5.4.1 py37hcc03f2d_0 conda-forge
pyzmq 22.0.3 py37hcce574b_1 conda-forge
qdldl-python 0.1.5 py37h08fd248_0 conda-forge
requests 2.25.1 pyhd3deb0d_0 conda-forge
scikit-learn 0.24.2 py37h8ded0a9_0 conda-forge
scipy 1.6.3 py37h924764e_0 conda-forge
scs 2.1.3 py37he58051b_0 conda-forge
send2trash 1.5.0 py_0 conda-forge
setuptools 49.6.0 py37h03978a9_3 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sniffio 1.2.0 py37h03978a9_1 conda-forge
sqlite 3.35.5 h8ffe710_0 conda-forge
tbb 2021.2.0 h2d74725_0 conda-forge
tensorboard 1.14.0 py37_0 conda-forge
tensorflow 1.14.0 h1f41ff6_0 conda-forge
tensorflow-base 1.14.0 py37hc8dfbb8_0 conda-forge
tensorflow-estimator 1.14.0 py37h5ca1d4c_0 conda-forge
termcolor 1.1.0 py_2 conda-forge
terminado 0.9.4 py37h03978a9_0 conda-forge
testpath 0.4.4 py_0 conda-forge
theano 1.0.5 py37hf2a7229_1 conda-forge
threadpoolctl 2.1.0 pyh5ca1d4c_0 conda-forge
tk 8.6.10 h8ffe710_1 conda-forge
tornado 6.1 py37hcc03f2d_1 conda-forge
traitlets 5.0.5 py_0 conda-forge
typing_extensions 3.7.4.3 py_0 conda-forge
urllib3 1.26.4 pyhd8ed1ab_0 conda-forge
vc 14.2 hb210afc_4 conda-forge
vs2015_runtime 14.28.29325 h5e1d092_4 conda-forge
vs2017_win-64 19.16.27038 h2e3bad8_2 conda-forge
vswhere 2.8.4 h57928b3_0 conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
webencodings 0.5.1 py_1 conda-forge
websocket-client 0.57.0 py37h03978a9_4 conda-forge
werkzeug 2.0.0 pyhd8ed1ab_0 conda-forge
wheel 0.36.2 pyhd3deb0d_0 conda-forge
win_inet_pton 1.1.0 py37h03978a9_2 conda-forge
wincertstore 0.2 py37h03978a9_1006 conda-forge
winpty 0.4.3 4 conda-forge
wrapt 1.12.1 py37hcc03f2d_3 conda-forge
yaml 0.2.5 he774522_0 conda-forge
zeromq 4.3.4 h0e60522_0 conda-forge
zipp 3.4.1 pyhd8ed1ab_0 conda-forge
zlib 1.2.11 h62dcd97_1010 conda-forge
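For what it's worth, the list above pairs keras 2.3.1 with h5py 3.2.1, and the 'str' object has no attribute 'decode' failure is a known incompatibility between Keras <= 2.3 weight loading and h5py >= 3.0 (which returns HDF5 string attributes as str rather than bytes). A commonly suggested workaround is pinning h5py below 3:
conda install -c conda-forge "h5py<3"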
Hi,
I tried to use the dataset UK-DALE in some experiments. Could it be that there are some problems with that particular dataset?
This is the error message I get for DAE:
Using TensorFlow backend.
Started training for DAE
Joint training for DAE
............... Loading Data for training ...................
Loading data for UK-DALE dataset
Loading building ... 1
Dropping missing values
Train Jointly
(4812480, 1) (4812480, 1) MultiIndex([('power', 'active')],
names=['physical_quantity', 'type']) MultiIndex([('power', 'active')],
names=['physical_quantity', 'type'])
Doing Preprocessing
Traceback (most recent call last):
File "ukstudies.py", line 275, in <module>
api_results = API(experiments[experiment_name])
File "/home/users/chklemen/anaconda3/envs/mirum/lib/python3.6/site-packages/nilmtk/api.py", line 59, in __init__
self.experiment(params)
File "/home/users/chklemen/anaconda3/envs/mirum/lib/python3.6/site-packages/nilmtk/api.py", line 104, in experiment
self.train_jointly(clf,d)
File "/home/users/chklemen/anaconda3/envs/mirum/lib/python3.6/site-packages/nilmtk/api.py", line 257, in train_jointly
clf.partial_fit(self.train_mains,self.train_submeters)
File "/home/users/chklemen/anaconda3/envs/mirum/lib/python3.6/site-packages/nilmtk_contrib/dae.py", line 61, in partial_fit
app_df = pd.concat(app_df,axis=0).values
File "/home/users/chklemen/anaconda3/envs/mirum/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 258, in concat
return op.get_result()
File "/home/users/chklemen/anaconda3/envs/mirum/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 473, in get_result
mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy
File "/home/users/chklemen/anaconda3/envs/mirum/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 2044, in concatenate_block_managers
values = values.copy()
MemoryError: Unable to allocate array with shape (99, 4812391) and data type float64
Closing remaining open files:/home/users/chklemen/ukdale.h5...done
I use the latest public version of UK-DALE.
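The failing allocation, shape (99, 4812391) in float64, is about 3.8 GB for a single copy, so the joint preprocessing of ~4.8M UK-DALE rows simply exhausts RAM. Two plausible mitigations: train on a shorter window, or use the API's chunk-wise loading via the top-level 'chunk_size' key (as in the AFHMM+SAC example earlier in these issues); whether the DAE implementation honors chunk-wise training is an assumption:

'chunk_size': 2**15,  # hypothetical value; added at the top level of the experiment dict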
Thanks!
Hi,
I have been experiencing issues with the DSC algorithm lately:
Started training for DSC
Joint training for DSC
............... Loading Data for training ...................
Loading data for REFIT dataset
Loading building ... 1
Dropping missing values
...............DSC partial_fit running...............
Training First dictionary for television
Traceback (most recent call last):
File "main_stuy.py", line 122, in <module>
api_results_real = API(experiment_real)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 62, in __init__
self.experiment(params)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 107, in experiment
self.train_jointly(clf,d)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/nilmtk/api.py", line 260, in train_jointly
clf.partial_fit(self.train_mains,self.train_submeters)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/nilmtk_contrib/disaggregate/dsc.py", line 141, in partial_fit
self.learn_dictionary(power, appliance_name)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/nilmtk_contrib/disaggregate/dsc.py", line 49, in learn_dictionary
model.fit(appliance_main.T)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/sklearn/decomposition/_dict_learning.py", line 1439, in fit
positive_code=self.positive_code)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/sklearn/decomposition/_dict_learning.py", line 756, in dict_learning_online
_check_positive_coding(method, positive_code)
File "/Users/christoph/anaconda/envs/thesis/lib/python3.6/site-packages/sklearn/decomposition/_dict_learning.py", line 28, in _check_positive_coding
"coding method.".format(method)
ValueError: Positive constraint not supported for 'lars' coding method.
Closing remaining open files:../../../data/REFIT.h5...done../../../data/REFIT.h5...done../../../data/REFIT.h5...done../../../data/REFIT.h5...done
I am using:
nilmtk_contrib.__version__
'0.1.0.dev1+git.cfb3b14'
nilmtk.__version__
'0.4.0.dev1+git.5956d31'
sklearn.__version__
'0.22.2.post1'
Any ideas on how to fix this?
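For reference, one workaround that appears to apply: the traceback goes through sklearn's dict_learning_online, so dsc.py presumably builds a MiniBatchDictionaryLearning with positive_code=True and the default fit_algorithm='lars', which sklearn 0.22+ rejects. Coordinate descent does support the positive-code constraint; a sketch with hypothetical hyperparameters:

from sklearn.decomposition import MiniBatchDictionaryLearning

model = MiniBatchDictionaryLearning(
    n_components=20,                 # hypothetical; use dsc.py's actual setting
    positive_code=True,
    fit_algorithm='cd',              # the default 'lars' raises the ValueError above
    transform_algorithm='lasso_cd',  # the transform side must also avoid 'lars'/'omp'
)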
Best,
Christoph
Hi,
Could the datasets that have already been converted to HDF5 files by the dataset converters also be made available to the research community?
Otherwise, only a few researchers will be able to use nilmtk and nilmtk-contrib for research, and the phenomenon will persist that "few publications presenting algorithmic contributions within the field went on to contribute implementations back to the toolkit".
import matplotlib.pyplot as plt

# Plot ground truth vs. each algorithm's prediction for every appliance column.
# gt_overall / pred_overall are the DataFrames produced by the API run
# (e.g. api_res.gt_overall and api_res.pred_overall).
for i in gt_overall.columns:
    plt.figure()
    # plt.plot(self.test_mains[0], label='Mains reading')
    plt.plot(gt_overall[i], label='Ground Truth')
    for clf in pred_overall:
        plt.plot(pred_overall[clf][i], label=clf)
    plt.xticks(rotation=90)
    plt.title(i)
    plt.legend()
    plt.show()