
vae-nilm's Introduction

Energy Disaggregation using Variational Autoencoders

This code implements the variational autoencoder model used in the paper:

Langevin, A., Carbonneau, M. A., Cheriet, M., & Gagnon, G. (2021). Energy Disaggregation using Variational Autoencoders. arXiv preprint arXiv:2103.12177.

Comparison methods:

Kelly, J., & Knottenbelt, W. (2015, November). Neural NILM: Deep neural networks applied to energy disaggregation. In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments (pp. 55-64).

https://github.com/JackKelly/neuralnilm

Chaoyun Zhang, Mingjun Zhong, Zongzuo Wang, Nigel Goddard, and Charles Sutton. "Sequence-to-point learning with neural networks for nonintrusive load monitoring." Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), Feb. 2-7, 2018.

https://github.com/MingjunZhong/seq2point-nilm

Pan, Y., Liu, K., Shen, Z., Cai, X., & Jia, Z. (2020, May). Sequence-to-subsequence learning with conditional GAN for power disaggregation. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 3202-3206). IEEE.

https://github.com/DLZRMR/seq2subseq

Setup

  1. Create your own environment with Python > 3.6
  2. Set up a deep learning environment with TensorFlow
  3. Install the other required packages
  4. Clone this repository
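A quick sanity check for step 1 can be sketched in Python (a minimal illustration; the TensorFlow import is left commented so the sketch runs even before step 2 is done):

```python
import sys

def check_environment(min_version=(3, 6)):
    """Return True if the interpreter satisfies the repo's Python requirement."""
    return sys.version_info[:2] > min_version

# Step 1: the code targets Python > 3.6
assert check_environment(), "Python > 3.6 required"

# Steps 2-3: once TensorFlow and the remaining packages are installed,
# an import confirms the stack (commented out so the sketch runs anywhere):
# import tensorflow as tf; print(tf.__version__)
```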

Datasets and preprocessing

  1. Download the UK-DALE files and extract the .dat files into each house folder.

Example:

Data/
|-- UKDALE/
|   |-- house_1
|   |   |-- channel1.dat
|   |   |-- channel2.dat
|   |   |-- ...
|   |-- house_2
|   |   |-- channel1.dat
|   |   |-- ...
|   |-- ...
  2. Execute the preprocessing script:
python uk_dale_preprocess.py

It will generate these files for each house and each appliance:

Data/
|-- UKDALE/
|   |-- Dishwasher_appliance_house_1
|   |-- Dishwasher_main_house_1
|   |-- Fridge_appliance_house_1
|   |-- Fridge_main_house_1
|   |-- ...
|   |-- Dishwasher_appliance_house_2
|   |-- Dishwasher_main_house_2
|   |-- Fridge_appliance_house_2
|   |-- Fridge_main_house_2
|   |-- ...
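For reference, each raw channelN.dat file consumed by the preprocessing step is plain text with one "<unix_timestamp> <power_in_watts>" pair per line. A minimal NumPy sketch of reading that format (the sample values are made up for illustration):

```python
import numpy as np

# A tiny sample in the channel.dat format:
# "<unix_timestamp> <active_power_watts>" per line
sample = """1303132929 85
1303132935 86
1303132941 87
"""
with open("channel_sample.dat", "w") as f:
    f.write(sample)

# Column 0 = UNIX timestamp (s), column 1 = power (W)
data = np.loadtxt("channel_sample.dat")
timestamps, power = data[:, 0], data[:, 1]
print(power.mean())  # mean power of the sample: 86.0
```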

Training and testing

The training is performed with the following command:

python NILM_disaggregation.py --gpu 0 --config Config/House_2/WashingMachine_VAE.json

where --gpu selects a specific GPU and --config selects the configuration file for the training run.

The test is performed with the following command:

python NILM_test.py --gpu 0 --config Config/House_2/WashingMachine_VAE.json

The script tests the last trained model for the selected configuration. It predicts the energy disaggregation on the test data (e.g., house 2) and saves it in "pred_1.npy". It also prints the results for the metrics MAE, ACC, PRECISION, RECALL, F1-SCORE, and SAE, and saves the scores in "results_median.npy".

Example:

Best Epoch : 82
6.366289849142183 # MAE
0.8244607666324364 # ACC
0.8333902355752817 # PREC
0.9463532832566028 # RECALL
0.8862867905689065 # F1-SCORE
[0.35107847] # SAE
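The reported metrics can be sketched as follows. This is a minimal illustration, not the repository's exact implementation: the 15 W on/off threshold and the toy power series are assumptions for the example.

```python
import numpy as np

def nilm_metrics(y_true, y_pred, threshold=15.0):
    """Toy versions of the scores printed by NILM_test.py."""
    mae = np.mean(np.abs(y_true - y_pred))
    # Signal Aggregate Error: relative error on total consumed energy
    sae = np.abs(y_pred.sum() - y_true.sum()) / y_true.sum()
    # On/off classification scores derived from a power threshold
    on_true, on_pred = y_true > threshold, y_pred > threshold
    tp = np.sum(on_true & on_pred)
    fp = np.sum(~on_true & on_pred)
    fn = np.sum(on_true & ~on_pred)
    acc = np.mean(on_true == on_pred)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return mae, acc, prec, rec, f1, sae

# Hypothetical ground-truth and predicted appliance power (W)
y_true = np.array([0.0, 0.0, 100.0, 120.0, 0.0])
y_pred = np.array([5.0, 0.0, 90.0, 110.0, 10.0])
print(nilm_metrics(y_true, y_pred))
```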

vae-nilm's People

Contributors

etssmartres


vae-nilm's Issues

Predicted Effect

Hello, thank you for your efforts on NILM research. Your published paper and the vae-nilm project are very helpful for my research. I tried to train the NILM model with your project on the UK-DALE low-frequency washing-machine data for 15 epochs (each epoch takes about 20 minutes, and there are large fluctuations at around epoch 12). The final training result is not as described in your article; please tell me whether I have a problem with my parameter selection.
(washing-machine training screenshots omitted)

run error

tensorflow.python.framework.errors_impl.FailedPreconditionError: Could not find variable conv2d_transpose_5/bias. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status error message=Resource localhost/conv2d_transpose_5/bias/N10tensorflow3VarE does not exist.
[[{{node training/RMSprop/RMSprop/update_conv2d_transpose_5/bias/ReadVariableOp_1}}]]
I get this error when running NILM_disaggregation.py.
What causes it, and how can it be solved?

When I try to run the VAE model it gives me multiple inputs error

I didn't change any of the code, but when I run the training it gives me this error:

ValueError: Layer "model" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 1024, 1) dtype=float32>]

Could you tell me how I can solve it?

run error

Hello, after configuring the files according to your requirements, I ran the code and found that the following error was reported. It would be a big help if anyone could point the way!
Thanks in Advance :)

(error screenshot omitted)

about Conv1DTranspose

Hello!
When I was learning from your code, I found a "Conv1DTranspose" function that you implemented yourself in VAE_function.py.
Is there any potential problem with using the official "Conv1DTranspose" function to upsample the data?
I am confused.
Thanks in advance if you would like to answer my question.
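(For context: tf.keras.layers.Conv1DTranspose was only added in TensorFlow 2.3, which is why older NILM code often rolls its own. A minimal NumPy sketch, not the repository's implementation, of what a stride-2 1-D transposed convolution does to the sequence length in the decoder's upsampling path:)

```python
import numpy as np

def conv1d_transpose(x, kernel, stride=2):
    """Naive 1-D transposed convolution (no padding): each input sample
    scatters a scaled copy of the kernel into the output at i * stride."""
    k = len(kernel)
    out_len = (len(x) - 1) * stride + k
    out = np.zeros(out_len)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * np.asarray(kernel)
    return out

x = np.array([1.0, 2.0, 3.0])
y = conv1d_transpose(x, kernel=[1.0, 1.0, 1.0], stride=2)
print(len(y))  # (3 - 1) * 2 + 3 = 7: the sequence is upsampled
```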

(code screenshot omitted)

Poor Reproduction Results of the Paper

Dear Author,

I have reimplemented your VAE model using the Disaggregator class from nilmtk_contrib. The experimental hyperparameters are as follows: epochs = 10, batch size = 64, window size = 512, learning rate = 3e-4, optimizer = Adam, validation rate = 0.15. The experimental dataset used four months of UK-DALE data for training and two months for testing. The training loss is defined according to your code, including the reconstruction loss and the KL divergence loss. The checkpoint strategy during training was also adopted from your code, using ModelCheckpoint(monitor="val_mean_absolute_error", mode="min", save_best_only=True) and CustomStopper(monitor='val_loss', mode="auto"). I observed in the TensorFlow command line logs that the losses were decreasing normally (the KL loss approached 0 over time, while the Recon_loss decreased very little).

However, the performance metrics after training were very poor (I trained five devices: Fridge, Kettle, Microwave, Dishwasher, and Washing Machine). Do you have any insights or suggestions for improvement?

Thank you.
(result screenshots omitted)

The model architecture part was completely copied from your code, with only some necessary modifications to the latent dimensions to adapt to the input data with a window size of 512. Additionally, I used Adam with a learning rate of 3e-4 in the model.compile() section. I personally believe this should not have a significant impact on the training process.

(code screenshots omitted)

REFIT

Hello, I want to run the REFIT dataset, but there is no corresponding configuration file in the code. Can you give me a solution?

eps

Dear Sir,
Thank you for sharing the code. I am a Chinese researcher, also working on power disaggregation.
I tried running the code and found some problems.
I think the problem is due to missing eps inputs when generating the dataset.
Could you advise what the values should be? I think it is a 16-float vector.
Regards,
qingfan
