yfzhang114 / onenet

This is an official PyTorch implementation of the NeurIPS 2023 paper "OneNet: Enhancing Time Series Forecasting Models under Concept Drift by Online Ensembling".

Python 99.54% Shell 0.46%
concept-drift machine-learning online-learning time-series-analysis


onenet's People

Contributors

qingsongedu, yfzhang114


onenet's Issues

Some basic questions to ask

Hello, I am a newcomer. I have read your paper and main code and have some basic questions:

  1. In your experiments, do you include a comparison between applying online learning and not applying it?
  2. For mitigating concept drift, did you compare periodically updating the model (train, then predict) against predicting directly without any updates?

I hope you could discuss these questions with me.
Looking forward to your reply. Thank you.
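For concreteness, the comparison asked about in question 1 can be sketched with a toy linear forecaster evaluated over a stream, once with per-step updates (online) and once frozen. This is an illustrative sketch only, not the paper's or the repository's code; `stream` is assumed to yield `(x, y)` pairs in temporal order.

```python
import numpy as np

def evaluate_stream(w, stream, online=True, lr=0.05):
    """Mean squared error of a linear forecaster y_hat = w @ x over a stream.

    With online=True the weights are updated after each prediction (LMS-style
    gradient step on the just-revealed label); with online=False the model
    stays frozen, so it cannot track concept drift.
    """
    w = np.array(w, dtype=float)
    errors = []
    for x, y in stream:
        y_hat = w @ x
        err = y_hat - y
        errors.append(float(err ** 2))
        if online:
            w -= lr * 2.0 * err * x  # gradient of (w @ x - y)^2 w.r.t. w
    return float(np.mean(errors))
```

On a stream whose generating weights change midway (concept drift), the online variant re-converges after the shift while the frozen model keeps paying the full drift penalty.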

upload files

Your work is excellent and very helpful to me. Could you upload the MOE and Average files used in the ensembling method?
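While waiting for the missing files, the two baselines named in the request can be sketched generically: a plain average of base-model forecasts, and a softmax-weighted (MoE-style gating) combination. These are illustrative stand-ins, not the repository's actual implementation.

```python
import numpy as np

def average_ensemble(preds):
    """Average the forecasts of several base learners ("Average" baseline).

    `preds` is a list of arrays of identical shape, one per base model.
    """
    return np.mean(np.stack(preds, axis=0), axis=0)

def weighted_ensemble(preds, scores):
    """Combine forecasts with softmax weights over per-model scores.

    An MoE-style gating sketch: a higher score gives a model more weight;
    equal scores reduce to the plain average.
    """
    scores = np.asarray(scores, dtype=float)
    w = np.exp(scores - scores.max())  # stable softmax
    w /= w.sum()
    stacked = np.stack(preds, axis=0)  # shape (n_models, *forecast_shape)
    return np.tensordot(w, stacked, axes=1)
```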

Regarding the results of the paper

Thank you for your contribution. However, when I tried to reproduce your code multiple times, I got results that differ from the article. With a prediction horizon of 1 on the ETTh2 dataset, I got 0.660 / 0.435. With a prediction horizon of 24 on the WTH dataset, I got 0.192 / 0.272. On the ECL dataset, I got 2.394 / 0.273, 2.324 / 0.360, and 3.089 / 0.382. What could be the reason?

Have you noticed the memory leak in the FSNet code?

I have investigated the CPU memory usage of their code. The available CPU memory decreases steadily while the program runs. Fortunately, their process is not killed, because the datasets in their experiments are small.
However, I tested their code on a dataset with 100,000 entries, and the process was killed about one tenth of the way through the test because of the memory leak.
Long series are not uncommon in real-world online learning, and online learning is geared toward real-world applications, so this issue deserves serious attention.
I have asked the author of FSNet but got no reply. Considering the in-depth research you have conducted on online time-series learning, I hope you could discuss this problem with me.
Looking forward to your reply.
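A simple way to confirm a leak like the one described is to wrap the test loop in a memory monitor built on the standard-library `tracemalloc` module. The `step_fn` below is a hypothetical stand-in for one test/update step; if the reported live size grows roughly linearly with the step count, something (a common culprit is a list that accumulates graph-attached losses or full predictions) is being retained.

```python
import tracemalloc

def run_with_memory_check(step_fn, n_steps, report_every=1000):
    """Run a loop while tracking Python heap growth with tracemalloc.

    Returns the list of live-heap sizes (bytes) sampled every `report_every`
    steps; a steadily growing sequence indicates a leak on the Python side.
    """
    tracemalloc.start()
    sizes = []
    for i in range(n_steps):
        step_fn(i)
        if (i + 1) % report_every == 0:
            current, peak = tracemalloc.get_traced_memory()
            sizes.append(current)
            print(f"step {i + 1}: {current / 1e6:.1f} MB live, "
                  f"peak {peak / 1e6:.1f} MB")
    tracemalloc.stop()
    return sizes
```

Note that `tracemalloc` only sees Python-level allocations; memory held by native tensor buffers would need an OS-level check (e.g. process RSS) instead.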

The dataset division of ETTm1

I notice that ETTm1 is divided in the same way as ETTh2 in your code and in FSNet. However, ETTh2 is recorded hourly while ETTm1 is recorded at 15-minute intervals, which means you only used a quarter of the ETTm1 data in your experiment. I think that is unreasonable.
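To make the quarter-of-the-data claim concrete, here is a sketch of the border arithmetic, assuming the Informer-style 12/4/4-month split that the FSNet-style loaders use. Hard-coding the hourly borders (`samples_per_hour=1`) and reusing them for 15-minute ETTm1 (`samples_per_hour=4`) covers only a quarter of the file.

```python
def ett_borders(samples_per_hour):
    """Train/val/test end borders for an ETT dataset, in number of samples.

    Assumes the common 12-month train / 4-month val / 4-month test split,
    with months counted as 30 days.
    """
    month = 30 * 24 * samples_per_hour  # samples in one 30-day month
    return 12 * month, (12 + 4) * month, (12 + 4 + 4) * month
```

With `samples_per_hour=1` the test border is 14,400 samples; ETTm1's correct border at `samples_per_hour=4` is 57,600, four times larger.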


A hyperparameter setting question about d3a

I noticed that the hyperparameter var_weight is not set when running the d3a script. Doesn't this make the augmented data identical to the unaugmented data during data augmentation? Looking forward to your answer.
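The concern can be illustrated with a generic noise-based augmentation scaled by a variance weight. This is a sketch only: the name `var_weight` is borrowed from the issue, and the script's exact augmentation logic may differ, but any multiplicative weight that defaults to zero makes the noise term vanish.

```python
import numpy as np

def augment(x, var_weight, seed=None):
    """Gaussian jitter augmentation scaled by var_weight (illustrative only).

    With var_weight=0 (e.g. when the flag is omitted and defaults to zero),
    the noise term vanishes and the "augmented" series equals the original.
    """
    rng = np.random.default_rng(seed)
    # Noise scaled to the series' own standard deviation along time.
    noise = rng.standard_normal(x.shape) * np.std(x, axis=-1, keepdims=True)
    return x + var_weight * noise
```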
