iacobo / continual
Continual Learning of Electronic Health Records (EHR).
License: MIT License
Add Protected Access (data) and open access (code) badges from:
https://github.com/saa-osig/badges-for-open-practices
MLP / LSTM take less time to train than CNN / Transformer. Add early stopping to avoid overtraining and saturation.
Change each strategy to a base strategy inheriting from the strategy plus an early-stopping plugin.
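A minimal sketch of the early-stopping criterion in plain Python (the real implementation would presumably use Avalanche's `EarlyStoppingPlugin`; the function name and scores below are illustrative assumptions, not repo code):

```python
def early_stopping(val_scores, patience=3):
    """Return the epoch index (exclusive) at which training should stop,
    given per-epoch validation scores (higher is better): stop once the
    score has not improved for `patience` consecutive epochs."""
    best, best_epoch = float("-inf"), 0
    for epoch, score in enumerate(val_scores):
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            return epoch + 1  # stop after this epoch
    return len(val_scores)

# Example: validation AUROC plateaus after epoch 2.
scores = [0.60, 0.70, 0.72, 0.71, 0.71, 0.70, 0.69]
print(early_stopping(scores, patience=3))  # → 6
```

This caps the extra epochs spent by the faster-saturating MLP / LSTM models without hand-tuning a per-model epoch budget.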
These metrics cannot be averaged over minibatches as is done for other metrics, since they depend on a threshold; they need to be calculated over all predictions. Check e.g. MeanScore for inspiration on the metric definition.
Would it make more sense to do the above anyway, to enable a fairer comparison of methods?
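A toy illustration of why per-minibatch averaging breaks threshold-dependent metrics (plain Python; `precision_at` and the sample batches are invented for illustration, not the repo's metric classes):

```python
def precision_at(preds, labels, thr=0.5):
    """Precision of the positive class at a fixed decision threshold."""
    tp = sum(1 for p, y in zip(preds, labels) if p >= thr and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p >= thr and y == 0)
    return tp / (tp + fp) if tp + fp else 0.0

batches = [
    ([0.9, 0.8, 0.2], [1, 0, 0]),   # batch precision 1/2
    ([0.6], [1]),                   # batch precision 1/1
]
# Averaging per-batch values weights small batches disproportionately...
per_batch = sum(precision_at(p, y) for p, y in batches) / len(batches)
# ...while pooling all predictions first gives the true dataset-level value.
pooled = precision_at([p for b in batches for p in b[0]],
                      [y for b in batches for y in b[1]])
print(per_batch, pooled)  # 0.75 vs 0.666...
```

The same mismatch affects any metric derived from a confusion matrix or a threshold sweep (AUROC, AUPRC), so the metric has to accumulate predictions across minibatches and compute its value once at the end.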
Hi. Thanks for the interesting work and for sharing the code. However, since there are no JSON files in /config in the current repo, calling python3 main.py --train
does not work directly. I would suggest providing a link to download the JSON files for the configurations you tuned and included in the paper. That would be very helpful for reproducing the results without re-tuning :) Thanks in advance.
This is necessary to compute the subject IDs to recover the time element.
plotting.plot_demographics()
# Secondary experiments:
########################
# Sensitivity to sequence length (4hr vs 12hr)
# Sensitivity to replay size Naive -> replay -> Cumulative
# Sensitivity to hyperparams of reg methods (Tune hyperparams over increasing number of tasks?)
# Sensitivity to number of variables (full vs Vitals only e.g.)
# Sensitivity to size of domains - e.g. white ethnicity much larger than all other groups; effect of order of sequence
Have manually edited the replay definition for now. Will need to update Avalanche and base the change on training.storage_policy.
May also need to change the memory buffer to n_tasks * buffer (since GEM etc. use this number for experience-wise buffer sizes).
Maybe add naive with no regularization? I.e. no dropout etc., to enable clearer ablation testing of naive fine-tuning and inherent regularization mechanisms vs explicit CL strategies.
Getting the following error (on GPU) with CNN runs with kernel_size in [5,7]:
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
Need to amend code to ignore missing columns for demographic splits.
See here:
Ray Tune produces the following warnings:
INFO registry.py:66 -- Detected unknown callable for trainable. Converting to class.
WARNING experiment.py:295 -- No name detected on trainable. Using DEFAULT.
Non-fatal, but it's annoying to have these messages bloating the console output.
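One way to quiet these messages (assuming they are emitted through Ray's standard Python loggers, so raising the level on the `ray.tune` logger hides INFO/WARNING output):

```python
import logging

# Hide INFO/WARNING chatter from Ray Tune (assumption: the messages are
# routed through the standard "ray.tune" logger hierarchy).
logging.getLogger("ray.tune").setLevel(logging.ERROR)
```

The "No name detected on trainable" warning itself can presumably be avoided by registering the trainable under an explicit name (e.g. via `tune.register_trainable`) instead of passing a bare callable.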