mondejar / ecg-classification
Code for training and testing machine learning classifiers on the MIT-BIH Arrhythmia database
License: GNU General Public License v3.0
Morph used: u-lbp
Error: Found array with 0 sample(s) (shape=(0, 59)) while a minimum of 1 is required by StandardScaler.
Morph used: wvlt
Error: Found array with 0 sample(s) (shape=(0, 23)) while a minimum of 1 is required by StandardScaler.
In your Python version, the code reads data under the scikit folder. However, there is no mention of how to generate these data. Did I miss something?
Right now I'm using a single SVM with C_value = {0.001, 0.01, 0.1, 1, 10, 100}, oversampling the training data with SMOTE, and gamma_value = 0.0, but the sensitivity for the S and F classes is very low. I read your paper, and it says that an ensemble of SVMs with the product rule gives good results. Could you please suggest any resources I can use to implement this in Python? I'm finding it difficult.
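For what it's worth, the product rule just multiplies the posterior estimates of the individual SVMs and picks the class with the largest combined score. Here is a minimal sketch, assuming each SVM is trained on a different feature subset (the two-group split and the synthetic data below are hypothetical stand-ins for the paper's feature groups):

```python
# Hedged sketch of an ensemble of SVMs combined with the product rule.
# The feature-group split is hypothetical; adapt it to your RR/wavelet/HOS groups.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Two hypothetical feature groups (e.g. RR-interval vs. morphology features)
groups = [slice(0, 10), slice(10, 20)]

svms = []
for g in groups:
    clf = SVC(C=1.0, gamma='scale', probability=True, random_state=0)
    clf.fit(X[:, g], y)
    svms.append((clf, g))

# Product rule: multiply the per-SVM posterior estimates, then take the
# class with the highest combined score.
probs = np.ones((X.shape[0], 4))
for clf, g in svms:
    probs *= clf.predict_proba(X[:, g])
y_pred = probs.argmax(axis=1)
```

Note that `probability=True` makes scikit-learn calibrate the SVM outputs with Platt scaling, which is what makes multiplying them meaningful.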
Best result obtained till now:
Ijk: 0.5552
Ij: 2.2895
Cohen's Kappa: 0.5380
Confusion Matrix:
[39988  1621   821  1603]
[  902   693   434    21]
[  113    16  3088     3]
[  249     1   102    36]
Overall ACC: 0.8815
mean Acc: 0.9408
mean Recall: 0.5745
mean Precision: 0.4959
N:
Sens: 0.9081
Prec: 0.9694
Acc: 0.8932
SVEB:
Sens: 0.3380
Prec: 0.2973
Acc: 0.9397
VEB:
Sens: 0.9590
Prec: 0.6952
Acc: 0.9701
F:
Sens: 0.0928
Prec: 0.0216
Acc: 0.9602
I am trying to clone the repository, but I can't seem to pull the Python files or the ones in the tensorflow folder.
They all appear as deleted when I run "git status", and when I try to restore them, the error that constantly comes up is the one in the title: "fatal: cannot create directory at 'python/aux': Invalid argument".
I don't know if this is related, but I also found a similar issue online (swcarpentry/DEPRECATED-bc#463) saying that "aux" is a reserved name on Windows.
Could you maybe check it out?
Thanks in advance!
''Two median filters are applied for this purpose, of 200 ms and 600 ms. Note that these values depend on the sampling frequency of the signal.
from scipy.signal import medfilt
...
# median_filter1D
baseline = medfilt(MLII, 71)
baseline = medfilt(baseline, 215)
''
How do I compute these values for my sampling frequency? And what is your sampling frequency?
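The MIT-BIH Arrhythmia recordings are sampled at 360 Hz, so a 200 ms window is 72 samples and a 600 ms window is 216 samples; scipy's medfilt requires an odd kernel size, which is where 71 and 215 come from. A small sketch of that derivation (the `odd_kernel` helper is my own, not the repository's):

```python
# Derive the median-filter kernel sizes from the sampling frequency.
from scipy.signal import medfilt
import numpy as np

def odd_kernel(duration_s, fs):
    """Samples covering duration_s, rounded down to the nearest odd number."""
    k = int(duration_s * fs)
    return k if k % 2 == 1 else k - 1

fs = 360  # Hz, sampling frequency of the MIT-BIH Arrhythmia database
k1 = odd_kernel(0.2, fs)   # 200 ms -> 71 samples
k2 = odd_kernel(0.6, fs)   # 600 ms -> 215 samples

signal = np.random.randn(2000)          # stand-in for an MLII lead
baseline = medfilt(medfilt(signal, k1), k2)
clean = signal - baseline               # baseline-corrected signal
```

For your own recordings, replace `fs` with your sampling frequency and keep the 200 ms / 600 ms durations.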
I have tried many tricks, but DS2 prediction puts everything into one class, like below:
[    0     0 44033     0]
[    0     0  2050     0]
[    0     0  3220     0]
[    0     0   388     0]
The prediction on DS1 is great, achieving good accuracy, but there seems to be a problem with DS2.
I have also tried oversampling, but it did not make any difference.
Please help as soon as possible; my minor project is pending!
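An aside on the imbalance side of this: when oversampling does not help, another common remedy for a dominant class is letting the SVM weight classes by inverse frequency (scikit-learn's `class_weight='balanced'`; note the repository's own run scripts expose a `use_weight_class` flag). A minimal sketch on synthetic imbalanced data, not the repository's exact pipeline:

```python
# Hedged sketch: class-weighted SVM as an alternative to oversampling.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic imbalanced data: ~90% / 7% / 3% class split.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, weights=[0.9, 0.07, 0.03],
                           random_state=0)

# 'balanced' sets each class weight to n_samples / (n_classes * count(class)),
# so minority-class errors cost more during training.
clf = SVC(C=1.0, gamma='scale', class_weight='balanced', random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
```

If DS2 still collapses to one class after this, the problem may be distribution shift between DS1 and DS2 rather than imbalance alone.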
While running run_train_SVM.py, an error occurs in a file named load_MITBIH.py: the label 'w' is not found, since the training set is DS_1.csv and there is no label 'w' in that file.
Looking for a response.
Error:
writing pickle: C:/Users/qasim/Desktop/jibran/ecg-classification-master/python/mit_db/features/w_90_90_DS1_rm_bsline_maxRR_u-lbp_MLII.p...
Traceback (most recent call last):
File "C:/Users/qasim/Desktop/jibran/ecg-classification-master/python/run_train_SVM.py", line 54, in
main(multi_mode, 90, 90, do_preprocess, use_weight_class, maxRR, use_RR, norm_RR, compute_morph, oversamp_method, pca_k, feature_selection, do_cross_val, C_value, gamma_value, reduced_DS, leads_flag)
File "C:\Users\qasim\Desktop\jibran\ecg-classification-master\python\train_SVM.py", line 165, in main
maxRR, use_RR, norm_RR, compute_morph, db_path, reduced_DS, leads_flag)
File "C:\Users\qasim\Desktop\jibran\ecg-classification-master\python\load_MITBIH.py", line 434, in load_mit_db
f = open(features_labels_name, 'w')
IOError: [Errno 2] No such file or directory: 'C:/Users/qasim/Desktop/jibran/ecg-classification-master/python/mit_db/features/w_90_90_DS1_rm_bsline_maxRR_u-lbp_MLII.p'
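For anyone hitting this: `[Errno 2]` on `open(path, 'w')` usually means the parent directory (here `.../mit_db/features/`) does not exist yet; Python's `open()` will not create intermediate directories. A hedged sketch of a guard (the `safe_open_w` helper is my own, not part of the repository):

```python
# Create missing parent directories before opening a file for writing.
import os
import tempfile

def safe_open_w(path):
    d = os.path.dirname(path)
    if d:
        os.makedirs(d, exist_ok=True)  # create parent dirs if missing
    return open(path, 'w')

# Demo in a temporary directory: 'features/' does not exist yet.
base = tempfile.mkdtemp()
with safe_open_w(os.path.join(base, 'features', 'demo.p')) as f:
    f.write('ok')
```

Alternatively, just create the `mit_db/features` directory by hand before running the script.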
Running train_SVM.py!
Loading MIT BIH arr (DS1) ...
Computing morphological features (DS1) ...
Wavelets ...
labels
writing pickle: /home/mjz/gzy/ecg-classification/python/ECG/mitbih_database/features/w_90_90_DS1_rm_bsline_maxRR_wvlt_MLII.p...
Loading MIT BIH arr (DS2) ...
Computing morphological features (DS2) ...
Wavelets ...
labels
writing pickle: /home/mjz/gzy/ecg-classification/python/ECG/mitbih_database/features/w_90_90_DS2_rm_bsline_maxRR_wvlt_MLII.p...
Traceback (most recent call last):
File "run_train_SVM.py", line 54, in
main(multi_mode, 90, 90, do_preprocess, use_weight_class, maxRR, use_RR, norm_RR, compute_morph, oversamp_method, pca_k, feature_selection, do_cross_val, C_value, gamma_value, reduced_DS, leads_flag)
File "/home/mjz/gzy/ecg-classification/python/train_SVM.py", line 195, in main
scaler.fit(tr_features)
File "/home/mjz/anaconda3/lib/python3.5/site-packages/sklearn/preprocessing/data.py", line 590, in fit
return self.partial_fit(X, y)
File "/home/mjz/anaconda3/lib/python3.5/site-packages/sklearn/preprocessing/data.py", line 612, in partial_fit
warn_on_dtype=True, estimator=self, dtype=FLOAT_DTYPES)
File "/home/mjz/anaconda3/lib/python3.5/site-packages/sklearn/utils/validation.py", line 431, in check_array
context))
ValueError: Found array with 0 sample(s) (shape=(0, 23)) while a minimum of 1 is required by StandardScaler.
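This ValueError means `tr_features` ended up with zero rows, typically because no beats survived the selected lead/feature flags. A hedged sketch of failing early with a clearer message before `StandardScaler.fit` (the `fit_scaler` wrapper is my own, not the repository's code):

```python
# Guard against an empty feature matrix before z-score scaling.
import numpy as np
from sklearn.preprocessing import StandardScaler

def fit_scaler(tr_features):
    tr_features = np.asarray(tr_features)
    if tr_features.shape[0] == 0:
        raise ValueError(
            "tr_features is empty: check that the selected lead (e.g. MLII) "
            "and morphology flags actually produced feature rows")
    scaler = StandardScaler()
    scaler.fit(tr_features)
    return scaler

# Works with non-empty input (10 beats x 23 wavelet coefficients here):
scaler = fit_scaler(np.random.randn(10, 23))
```

If the guard fires, inspect the pickle-writing step above it: an empty features file written once will keep reloading as empty until deleted.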
When I delete all the pickle files and then edit load_MITBIH.py to include the fifth (Q) class, only 4 classes get included, not the fifth. I have also updated evaluation_AAMI.py to produce a 5-class confusion matrix, but the fifth row and column always contain 0.
I'm using a window size of 180 samples centred at the R-peak.
What should I do?
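For reference, the 180-sample window (winL = winR = 90, matching the 90_90 in the pickle file names) can be extracted like this. A hedged sketch with a hypothetical `extract_beats` helper, not the repository's exact code; beats whose window would run off the record edge are skipped:

```python
# Cut fixed-length windows centred on each R-peak.
import numpy as np

def extract_beats(signal, r_peaks, winL=90, winR=90):
    beats = []
    for r in r_peaks:
        if r - winL >= 0 and r + winR <= len(signal):
            beats.append(signal[r - winL:r + winR])
    return np.array(beats)

sig = np.random.randn(1000)                 # stand-in for one lead
beats = extract_beats(sig, [50, 500, 990])  # first and last are too close to the edges
```

If edge beats are silently dropped like this, a rare class (such as Q) can vanish entirely from a short record, which is one possible cause of an all-zero fifth row.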
I am getting the error '[Errno 2] No such file or directory: '/kaggle/working/features/w_90_90_DS1_rm_bsline_maxRR_u-lbp_MLII.p'' after running run_train_SVM.py. Instead, the file /kaggle/working/python_mit_rm_bsline_wL_90_wR_90_DS1.p is created, and I don't know why this file is created instead of the required one. Can anyone please help?
Actually, I want to work with all 180 features rather than extracting a few descriptors from them (wvlt, HOS, etc.).
Is there a way to do that?
Thanks.
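Conceptually this is straightforward: each segmented beat is already a fixed-length 180-sample vector, so the stacked beats can be fed directly to a scaler and SVM without computing descriptors. A hedged sketch on synthetic data standing in for real beats, not the repository's pipeline:

```python
# Use the raw 180-sample beat window directly as the feature vector.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
beats = rng.standard_normal((100, 180))    # n_beats x 180 raw samples
labels = rng.integers(0, 4, size=100)      # hypothetical AAMI class ids

X = StandardScaler().fit_transform(beats)  # z-score each sample position
clf = SVC(C=1.0, gamma='scale').fit(X, labels)
```

Raw samples are more sensitive to alignment and amplitude variation than morphology descriptors, so results may differ from the paper's.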
I have trained the model, and now I want to test it on my own ECG record, which is in CSV format. I checked all the documentation and code files, but I had no luck.
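A hedged starting point (not part of the repository): load the CSV into a 1-D signal array, then apply exactly the same pipeline used in training — baseline removal, R-peak windowing, the same feature computation, the scaler fitted on the training set, and finally the SVM. The `load_ecg_csv` helper below is hypothetical and assumes one sample per row:

```python
# Read one ECG channel from a CSV source into a NumPy array.
import csv
import io
import numpy as np

def load_ecg_csv(fileobj, column=0):
    """Return the chosen column of a CSV file-like object as floats."""
    return np.array([float(row[column]) for row in csv.reader(fileobj)])

# Tiny in-memory example standing in for your file:
sample = io.StringIO("0.01\n0.02\n-0.05\n0.10\n")
signal = load_ecg_csv(sample)
# ...then: remove baseline, detect R-peaks, cut 90/90 windows, compute the
# same features, scaler.transform(features), svm.predict(features_scaled)
```

Mismatched sampling frequency or lead will silently degrade predictions, so resample to the training rate first if your recorder differs.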
While running run_train_SVM.py, the feature values do get computed, but I'm not sure whether they are being added.
train_SVM.py in main(multi_mode, winL, winR, do_preprocess, use_weight_class, maxRR, use_RR, norm_RR, compute_morph, oversamp_method, pca_k, feature_selection, do_cross_val, C_value, gamma_value, reduced_DS, leads_flag)
195 # scaled: zero mean unit variance ( z-score )
196 scaler = StandardScaler()
--> 197 scaler.fit(tr_features)
198 tr_features_scaled = scaler.transform(tr_features)