mlforhealth / mimic_extract

MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III

License: MIT License

Python 16.19% Jupyter Notebook 82.89% Makefile 0.36% Shell 0.57%

mimic_extract's People

Contributors

bendikjohansen, bnestor, kheuton, mit2400, mmcdermott, shirly1024


mimic_extract's Issues

No object named vitals_labs in the file

Hi,

I have downloaded the HDF5 file via GCP and was trying to run the provided notebooks, but they threw an error (see the attached screenshot).

[error screenshot]

I have checked that the path to the file is correct, but I am unable to extract the data from the HDF5 file. Can you please advise on what could be going wrong?

Tabinda

Missing Library

I am trying to run the Baselines for Mortality and LOS prediction - GRU-D notebook, but the from mmd_grud_utils import * line raises an exception.

Could you please share this library?

Number of vital and lab features

I read in the MIMIC-Extract paper that there are 93 vital and lab features. However, when I read Appendix B, I found only about 67 features. Moreover, the vitals_colnames.txt file generated during feature extraction lists only 91 features.

Why are there three different feature counts?

What does statics.max_hours mean? I cannot find this column.

I executed mimic_direct_extract.py and got the same files as the instructions describe.
I read the statics information with:
statics = pd.read_hdf(LEVEL2, 'patients')
The shape of statics is (34472, 27), which lacks the max_hours column.
So, what does max_hours mean?
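
For context, a hedged guess rather than a maintainer answer: max_hours appears to denote the length of the ICU stay in hours, so a comparable column can be derived from the static intime/outtime timestamps. A minimal pandas sketch under that assumption:

    import pandas as pd

    statics = pd.read_hdf(LEVEL2, 'patients')  # LEVEL2 is the HDF5 path used above

    # assumption: max_hours is the ICU stay length in whole hours
    stay_length = statics['outtime'] - statics['intime']
    statics['max_hours'] = (stay_length.dt.total_seconds() // 3600).astype(int)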

Missing files

I am trying to run the Baselines for Mortality and LOS prediction notebook, but some files are missing.
Where can I find the files below?

'/scratch/mmd/mimic_data/final/nogrouping_5/all_hourly_data.h5'

and

'/scratch/mmd/extraction_baselines-sklearn.pkl'

Adding a LICENSE file.

Thank you for making your code available! I was wondering if you could add a LICENSE file (e.g. MIT license) to make the terms of use clear?

Bug in mimic_direct_extract.py: 'Series' object has no attribute 'columns' on line 973

Running

python3 mimic_direct_extract.py 

fails with the following traceback:

Traceback (most recent call last):
  File "mimic_direct_extract.py", line 973, in <module>
    if N is not None: print("Notes", N.shape, N.index.names, N.columns.names)
  File "/afs/csail.mit.edu/u/a/amakelov/.conda/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/core/generic.py", line 5063, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'columns'

I believe this is not due to a problem with setting up the database or the materialized views. Rather, something seems to have gone wrong in the internal logic of the script: it looks like the script expects N to be a DataFrame, but it is actually a Series.
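
A hedged sketch of a possible workaround (a guess, not the maintainers' fix), assuming the notes extraction can legitimately return a single-column Series: coerce N to a DataFrame before touching .columns.

    import pandas as pd

    # N is whatever the script extracted for the notes table
    if N is not None:
        if isinstance(N, pd.Series):
            N = N.to_frame()  # a Series has no .columns attribute
        print("Notes", N.shape, N.index.names, N.columns.names)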

Getting "relation "icustay_detail" does not exist" when running "make build_curated_from_psql"

Hi,

Thanks for your detailed documentation. I've tried to follow the instructions as closely as possible: I've run "make concepts" from the original mimic-code repository and also successfully run "make build_concepts" from this repository. Looking at the concepts directory in my MIMIC data folder, I have this:

[directory listing screenshot]

Also, the curated folder of my MIMIC-Extract directory contains only "static_data.csv".
This is a snapshot of the error I'm getting:

[error screenshot]

Do you have any idea what might be the issue?

Thanks :)

REORG repository

root/
    README.md
    Makefile
    data/
        .gitignore
    notebooks/
    sql_concepts/
        .sql
        Makefile
    mimic_extract/
        resources/
        Makefile

How to get the .pkl file?

I also can't find the "extraction_baselines-sklearn.pkl" file anywhere. Please tell me how to get this file or where to find it. Thank you so much!

RESULTS_PATH = '/scratch/mmd/extraction_baselines-sklearn.pkl'

"make build_concepts" does not work.

Hi!

I'm trying to install the staging branch and I keep running into trouble in the early steps. I'm stuck on Step 4.

I have changed the environmental variables to:

export DBUSER=postgres
export DBNAME=mimic
export SCHEMA=mimiciii
export HOST=localhost
export DBSTRING="dbname=$DBNAME options=--search_path=$SCHEMA"

and it throws this error:

(mimic_extract_py36) user@computer:~/Documents/MIMIC_Extract-staging/utils$ make build_concepts
{ \
source setup_user_env.sh; \
[ -e ../../mimic-code/buildmimic/postgres/Makefile ] || git clone https://github.com/MIT-LCP/mimic-code/ ../../mimic-code/; \
}
{ \
source setup_user_env.sh; \
cd ../../mimic-code/concepts; \
psql -U  "" -h  -f ./make-concepts.sql; \
cd ../../MIMIC_Extract-staging/utils; \
}
psql: could not translate host name "-f" to address: Name or service not known
{ \
source ./setup_user_env.sh; \
psql -U  "" -h  -f ./niv-durations.sql; \
psql -U  "" -h  -f ./crystalloid-bolus.sql; \
psql -U  "" -h  -f ./colloid-bolus.sql; \
}
psql: could not translate host name "-f" to address: Name or service not known
psql: could not translate host name "-f" to address: Name or service not known
psql: could not translate host name "-f" to address: Name or service not known
Makefile:46: recipe for target 'build_extra_concepts' failed
make: *** [build_extra_concepts] Error 2

Does anyone have a solution? Thank you very much in advance :)

Missing en-core-web-sm for conda environment?

Hi there,

I've tried following the README, but I'm getting an error at Step 2, where I need to create the conda environment. It looks like en-core-web-sm is either missing or the specified version is not available?
Looking up the name on pypi.org didn't return any results.

conda env create --force -f ../mimic_extract_env_py36.yml

Collecting package metadata (repodata.json): done
Solving environment: done
Preparing transaction: done
Verifying transaction: done
Executing transaction: \ Enabling notebook extension jupyter-js-widgets/extension...
      - Validating: OK

done
Installing pip dependencies: / Ran pip subprocess with arguments:
['/home/chen/miniconda3/envs/mimic_extract_py36/bin/python', '-m', 'pip', 'install', '-U', '-r', '/home/chen/MIMIC_Extract/condaenv._xem_m0y.requirements.txt']
Pip subprocess output:
Collecting blis==0.4.1 (from -r /home/chen/MIMIC_Extract/condaenv._xem_m0y.requirements.txt (line 1))
  Using cached https://files.pythonhosted.org/packages/41/19/f95c75562d18eb27219df3a3590b911e78d131b68466ad79fdf5847eaac4/blis-0.4.1-cp36-cp36m-manylinux1_x86_64.whl
Collecting catalogue==1.0.0 (from -r /home/chen/MIMIC_Extract/condaenv._xem_m0y.requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/6c/f9/9a5658e2f56932e41eb264941f9a2cb7f3ce41a80cb36b2af6ab78e2f8af/catalogue-1.0.0-py2.py3-none-any.whl
Collecting en-core-web-sm==2.1.0 (from -r /home/chen/MIMIC_Extract/condaenv._xem_m0y.requirements.txt (line 3))

Pip subprocess error:
  ERROR: Could not find a version that satisfies the requirement en-core-web-sm==2.1.0 (from -r /home/chen/MIMIC_Extract/condaenv._xem_m0y.requirements.txt (line 3)) (from versions: none)
ERROR: No matching distribution found for en-core-web-sm==2.1.0 (from -r /home/chen/MIMIC_Extract/condaenv._xem_m0y.requirements.txt (line 3))

failed

CondaEnvException: Pip failed
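
A likely explanation, offered as an assumption rather than a confirmed diagnosis: en-core-web-sm is a spaCy model, and spaCy models are published as GitHub release archives rather than on PyPI, so pip cannot resolve the name. One possible workaround is to install the model from its release archive after the rest of the environment is created:

    import subprocess
    import sys

    # install the spaCy model from its release archive, since pip cannot
    # resolve "en-core-web-sm" on PyPI by name
    URL = ("https://github.com/explosion/spacy-models/releases/download/"
           "en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz")
    subprocess.check_call([sys.executable, "-m", "pip", "install", URL])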

ResolvePackageNotFound error when using the .yml file to set up the conda environment

First of all, thanks a lot for documenting the process of working with the MIMIC-III data! I'm currently trying to set up the conda environment as the README describes, but I'm facing this error:

$ conda env create --force -f ../mimic_extract_env.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:
  - hdf5==1.8.17=2
  - tk==8.5.18=0
  - pycairo==1.10.0=py27_0
  - tornado==4.5.2=py27_0
  - sqlite==3.13.0=0
  - pathlib2==2.2.1=py27_0
  - zeromq==4.1.5=0
  - libgfortran==3.0.0=1
  - pexpect==4.2.1=py27_0
  - pip==9.0.1=py27_1
  - xz==5.2.3=0
  - libxcb==1.12=1
  - pcre==8.39=1
  - freetype==2.5.5=2
  - html5lib==0.9999999=py27_0
  - jinja2==2.9.6=py27_0
  - fontconfig==2.12.1=3
  - jupyter_core==4.3.0=py27_0
  - simplegeneric==0.8.1=py27_1
  - python==2.7.13=0
  - wheel==0.29.0=py27_0
  - ipython==5.3.0=py27_0
  - icu==54.1=0
  - gst-plugins-base==1.8.0=0
  - ptyprocess==0.5.1=py27_0
  - jupyter_console==5.2.0=py27_0
  - libxml2==2.9.4=0
  - readline==6.2=2
  - jpeg==9b=0
  - scipy==0.19.1=np113py27_0
  - libiconv==1.14=0
  - jupyter_client==5.1.0=py27_0
  - zlib==1.2.8=3
  - pyparsing==2.2.0=py27_0
  - entrypoints==0.2.3=py27_0
  - widgetsnbextension==3.0.2=py27_0
  - sip==4.18=py27_0
  - markupsafe==1.0=py27_0
  - dateutil==2.4.1=py27_0
  - mistune==0.7.4=py27_0
  - libpng==1.6.30=1
  - ssl_match_hostname==3.5.0.1=py27_0
  - cairo==1.14.8=0
  - qt==5.6.2=5
  - libpq==9.5.4=0
  - pytz==2017.2=py27_0
  - pyzmq==16.0.2=py27_0
  - numexpr==2.6.2=np113py27_0
  - sqlalchemy==1.1.13=py27_0
  - dbus==1.10.20=0
  - scandir==1.5=py27_0
  - enum34==1.1.6=py27_0
  - setuptools==27.2.0=py27_0
  - decorator==4.0.11=py27_0
  - pixman==0.34.0=0
  - jupyter==1.0.0=py27_3
  - ipykernel==4.6.1=py27_0
  - ipywidgets==6.0.0=py27_0
  - backports==1.0=py27_0
  - pandocfilters==1.4.2=py27_0
  - scikit-learn==0.19.0=np113py27_0
  - python-dateutil==2.6.1=py27_0
  - glib==2.50.2=1
  - get_terminal_size==1.0.0=py27_0
  - openssl==1.0.2l=0
  - expat==2.1.0=0
  - h5py==2.7.0=np113py27_0
  - mkl==2017.0.3=0
  - notebook==5.0.0=py27_0
  - path.py==10.3.1=py27_0
  - matplotlib==2.0.2=np113py27_0
  - functools32==3.2.3.2=py27_0
  - gstreamer==1.8.0=0
  - pydotplus==2.0.2=py27_0
  - libsodium==1.0.10=0
  - pyqt==5.6.0=py27_2
  - libffi==3.2.1=1
  - bleach==1.5.0=py27_0
  - prompt_toolkit==1.0.14=py27_0
  - six==1.10.0=py27_0
  - terminado==0.6=py27_0
  - nbconvert==5.2.1=py27_0
  - psycopg2==2.7.1=py27_0
  - subprocess32==3.2.7=py27_0
  - pytables==3.4.2=np113py27_0
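
A common cause of ResolvePackageNotFound (my assumption, not verified against this exact file) is that each pin carries an exact, platform-specific conda build string: the part after the second "=", e.g. the "2" in hdf5==1.8.17=2, which cannot be satisfied on a different OS or channel state. A hedged sketch that strips the build strings so conda only has to resolve name and version:

    import re

    # hypothetical helper: relax "name==version=build" pins to "name==version",
    # matching the pin format shown in the error above
    with open('mimic_extract_env.yml') as f:
        text = f.read()

    relaxed = re.sub(r'(==[0-9][\w.]*)=\S+', r'\1', text)

    with open('mimic_extract_env_relaxed.yml', 'w') as f:
        f.write(relaxed)

Retrying with conda env create -f mimic_extract_env_relaxed.yml may then resolve, though the installed builds can differ slightly from the originals.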

MIMIC-Extract for MIMIC-IV

Hi,

Do you have plans to adapt MIMIC-Extract for MIMIC-IV in the near future, or do you think MIMIC-Extract can work on MIMIC-IV with minimal modifications?

Thanks in advance.

Querying lab items information

Hi there. Thanks for this great repo. I have a question about querying itemid values to construct the vital and lab time series. I noticed that when retrieving item information, only one MIMIC table, d_items, is queried from the original database. I wonder whether we also need d_labitems here, or whether the lab dictionary is actually irrelevant. Thank you.
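
For reference, a hedged sketch of how both dictionaries could be consulted together; the connection parameters are illustrative. In MIMIC-III, chartevents itemids are described in d_items, while labevents itemids are described in d_labitems:

    import pandas as pd
    import psycopg2

    # illustrative connection parameters; adjust to your local install
    conn = psycopg2.connect(dbname='mimic', user='postgres', host='localhost',
                            options='--search_path=mimiciii')

    # d_items covers chartevents itemids; d_labitems covers labevents itemids
    query = """
        SELECT itemid, label FROM d_items
        UNION ALL
        SELECT itemid, label FROM d_labitems;
    """
    items = pd.read_sql_query(query, conn)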

vitals_hourly_data includes lots of duplicates

When calculating numerics, the mimic_direct_extract.py main function pulls in data from both chartevents and labevents, but many of these records are duplicates. For example, there are 7 different itemids for white blood cell count, and much of the recorded data is duplicated across the two source tables (as described in the MIMIC documentation). These duplicates should be removed before computing statistics such as count, mean, and standard deviation for vitals_hourly_data.
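
A minimal pandas sketch of the de-duplication being proposed, assuming a long-format frame with one row per raw measurement (the column names are illustrative, not the script's actual ones):

    import pandas as pd

    def dedup_measurements(X: pd.DataFrame) -> pd.DataFrame:
        """Drop rows recording the same value for the same grouped variable,
        stay, and timestamp, so a measurement charted under several itemids
        (e.g. the 7 white-blood-cell itemids) is only counted once."""
        return X.drop_duplicates(
            subset=['icustay_id', 'LEVEL2', 'charttime', 'value'],
            keep='first',
        )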

How to remove the hourly grouping of measurements

Some other baselines do not group the measurements hourly as your pipeline does. If I want to skip the hourly grouping while using your pipeline, how should I modify the code (I guess it's in mimic_direct_extract.py)? That is, which functions do I need to look at?

Own itemid_to_variable_map

First, thank you so much for this code and great baseline.
I am trying to run mimic_direct_extract.py with my own mapping file:

new_mimic_map.xlsx

but keep getting the error:

starting db query with 3000 subjects...
db query finished after 630.865 sec
..//mimic_direct_extract.py:265: FutureWarning: Using 'rename_axis' to alter labels is deprecated. Use '.rename' instead
  {'LEVEL2': 'LEVEL2', 'LEVEL1': 'LEVEL1', 'ITEMID': 'itemid'}, axis=1
Traceback (most recent call last):
  File "..//mimic_direct_extract.py", line 887, in <module>
    min_percent=args['min_percent']
  File "..//mimic_direct_extract.py", line 281, in save_numerics
    X = X.join(var_map).join(I).set_index(['label', 'LEVEL1', 'LEVEL2'], append=True)
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/frame.py", line 6815, in join
    rsuffix=rsuffix, sort=sort)
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/frame.py", line 6830, in _join_compat
    suffixes=(lsuffix, rsuffix), sort=sort)
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 48, in merge
    return op.get_result()
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 546, in get_result
    join_index, left_indexer, right_indexer = self._get_join_info()
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 744, in _get_join_info
    sort=self.sort)
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 3245, in join
    return_indexers=return_indexers)
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 3394, in _join_multi
    return_indexers=return_indexers)
  File "/home/rotem/anaconda3/envs/mimic_data_extraction/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 3474, in _join_level
    raise NotImplementedError('Index._join_level on non-unique index '
NotImplementedError: Index._join_level on non-unique index is not implemented
Makefile:16: recipe for target 'build_curated_from_psql' failed
make: *** [build_curated_from_psql] Error 1

Can you think of anything I might be doing wrong with my mapping file?
Thank you again for sharing this wonderful work.

Question about removing variables with high missing values

In your extraction script, specifying min_percent excludes variables with high proportions of missing values. However, in the code, these columns are removed before splitting into train/test sets; doesn't that leak information from the test set into the train set? I think the correct setup is to use the training data to find columns with a high missing percentage, and then filter those columns out of the test set as well.
Anyway, I think it's just a minor detail; I just want to hear your opinion.
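
A hedged sketch of the leakage-free variant being suggested, with illustrative names (a train/test pair of wide frames and the script's min_percent threshold):

    import pandas as pd

    def filter_by_train_missingness(X_train: pd.DataFrame,
                                    X_test: pd.DataFrame,
                                    min_percent: float):
        """Keep only columns observed in at least min_percent% of *training*
        rows, then apply that same column set to the test split."""
        observed_frac = X_train.notnull().mean()
        keep = observed_frac[observed_frac >= min_percent / 100.0].index
        return X_train[keep], X_test[keep]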

GCP Issue: Additional permissions required to list objects in this bucket.

When I open the GCP link provided in readme.md, a warning shows: "Additional permissions required to list objects in this bucket. Ask a bucket owner to grant you 'storage.objects.list' permission."

I already have credentialed access to MIMIC-III v1.4. How can I request permission for the preprocessed output? Thanks!

How to contribute?

Is there a plan to continue developing this library, and if so, are community contributions welcomed or accommodated?

Are there different versions of `all_hourly_data.h5`?

The Baselines for Mortality and LOS prediction - Sklearn notebook has the following lines:

DATA_FILEPATH     = '/scratch/mmd/mimic_data/final/grouping_5/all_hourly_data.h5'
RAW_DATA_FILEPATH = '/scratch/mmd/mimic_data/final/nogrouping_5/all_hourly_data.h5'

What is grouping_5 vs nogrouping_5? Are there multiple versions of all_hourly_data.h5? The preprocessing scripts only generated one version of all_hourly_data.h5 (in the curated folder). Am I overlooking something?

An issue with GRU-D

Hi there, thanks for the great work in building the data pipeline!

There seems to be an issue with the implementation of GRU-D in this line, where r is not included in combined_r, unlike in the adapted implementation.
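
For context, a hedged PyTorch-style sketch of the GRU-D-style update being discussed (layer names and shapes are illustrative, not the notebook's actual code): the reset gate r has to scale the hidden state inside the concatenation that feeds the candidate state.

    import torch

    def grud_cell_step(x, h, m, W_r, W_z, W_h):
        """One masked GRU step: x = input, h = hidden state, m = missingness
        mask; W_* are torch.nn.Linear layers over the concatenated features."""
        combined = torch.cat([x, h, m], dim=1)
        r = torch.sigmoid(W_r(combined))              # reset gate
        z = torch.sigmoid(W_z(combined))              # update gate
        combined_r = torch.cat([x, r * h, m], dim=1)  # r must gate h here
        h_tilde = torch.tanh(W_h(combined_r))         # candidate state
        return (1 - z) * h + z * h_tilde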

make build_curated_from_psql command fails

Hi,

When I run the make build_curated_from_psql command, I get the error below. I think I applied all the steps in the README, but the code still doesn't work. Any help is appreciated.

pandas.io.sql.DatabaseError: Execution failed on sql '
select distinct i.subject_id, i.hadm_id, i.icustay_id,
i.gender, i.admission_age as age, a.insurance,
a.deathtime, i.ethnicity, i.admission_type, s.first_careunit,
CASE when a.deathtime between i.intime and i.outtime THEN 1 ELSE 0 END AS mort_icu,
CASE when a.deathtime between i.admittime and i.dischtime THEN 1 ELSE 0 END AS mort_hosp,
i.hospital_expire_flag,
i.hospstay_seq, i.los_icu,
i.admittime, i.dischtime,
i.intime, i.outtime
FROM icustay_detail i
INNER JOIN admissions a ON i.hadm_id = a.hadm_id
INNER JOIN icustays s ON i.icustay_id = s.icustay_id
WHERE s.first_careunit NOT like 'NICU'
and i.hadm_id is not null and i.icustay_id is not null
and i.hospstay_seq = 1
and i.icustay_seq = 1
and i.admission_age >= 15
and i.los_icu >= 0.5
and (i.outtime >= (i.intime + interval '12 hours'))
and (i.outtime <= (i.intime + interval '240 hours'))
ORDER BY subject_id
LIMIT 100
;
': column i.admission_age does not exist
LINE 3: i.gender, i.admission_age as age, a.insurance,
^
HINT: Perhaps you meant to reference the column "i.admission_type".

Makefile:16: recipe for target 'build_curated_from_psql' failed
make: *** [build_curated_from_psql] Error 1

Extracting valuenum instead of value

First, let me thank you for the amazing work and the latest updates.

I ran into a problem when extracting the "vitals_labs" (X dataframe), after trying to extract a feature that has text in its "value" column but a number in its "valuenum" column in the "chartevents" table.

My question is: why did you decide to extract the values from the "value" column rather than "valuenum"?
And could I, by changing the "save_numerics" function, extract the "valuenum" data without breaking the dataframe/code/data?

Thank you
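
A hedged sketch of the kind of change being asked about (a guess at an approach, not the script's actual save_numerics logic): prefer valuenum where present, and fall back to a numeric parse of the text value.

    import pandas as pd

    def coalesce_numeric_value(df: pd.DataFrame) -> pd.Series:
        """Prefer chartevents.valuenum; fall back to parsing the text value."""
        parsed = pd.to_numeric(df['value'], errors='coerce')
        return df['valuenum'].fillna(parsed)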

tables.exceptions.HDF5ExtError: Problems creating the Array

Hi,

I installed MIMIC_Extract and was able to do a test run with population=100 (by calling mimic_direct_extract.py). However, when I tried to run it on the complete population, I got an error.

Here is a traceback of the error. Any idea how to fix this? I installed the MIMIC database with Docker, so MIMIC_Extract was running within a Docker container, but storage space (>100 GB left on the device) and available memory (>50 GB) are not an issue here.

No known ranges for Basophils
No known ranges for pH urine
Glucose had 528 / 863595 rows cleaned:
  8 rows were strict outliers, set to np.nan
  520 rows were low valid outliers, set to 33.00
  0 rows were high valid outliers, set to 2000.00

No known ranges for Systemic Vascular Resistance
Height had 12 / 15182 rows cleaned:
  8 rows were strict outliers, set to np.nan
  0 rows were low valid outliers, set to 0.00
  4 rows were high valid outliers, set to 240.00

Sodium had 22 / 425997 rows cleaned:
  0 rows were strict outliers, set to np.nan
  20 rows were low valid outliers, set to 50.00
  2 rows were high valid outliers, set to 225.00

No known ranges for Lymphocytes ascites
Anion gap had 130 / 208219 rows cleaned:
  9 rows were strict outliers, set to np.nan
  108 rows were low valid outliers, set to 5.00
  13 rows were high valid outliers, set to 50.00

Shape of X :  (2200954, 312)
mimic_direct_extract.py:303: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
  np.save(os.path.join(outPath, subjects_filename), data['subject_id'].as_matrix())
mimic_direct_extract.py:305: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
  np.save(os.path.join(outPath, times_filename), data['max_hours'].as_matrix())
mimic_direct_extract.py:324: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
  if dynamic_filename is not None: np.save(os.path.join(outPath, dynamic_filename), X.as_matrix())
/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/tables/attributeset.py:475: NaturalNameWarning: object name is not a valid Python identifier: 'axis0_nameAggregation Function'; it does not match the pattern ``^[a-zA-Z_][a-zA-Z0-9_]*$``; you will not be able to use natural naming to access this object; using ``getattr()`` will still work, though
  check_attribute_name(name)
Traceback (most recent call last):
  File "mimic_direct_extract.py", line 922, in <module>
    min_percent=args['min_percent']
  File "mimic_direct_extract.py", line 325, in save_numerics
    if dynamic_hd5_filename is not None: X.to_hdf(os.path.join(outPath, dynamic_hd5_filename), 'X')
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/core/generic.py", line 2377, in to_hdf
    return pytables.to_hdf(path_or_buf, key, self, **kwargs)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/io/pytables.py", line 274, in to_hdf
    f(store)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/io/pytables.py", line 268, in <lambda>
    f = lambda store: store.put(key, value, **kwargs)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/io/pytables.py", line 889, in put
    self._write_to_group(key, value, append=append, **kwargs)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/io/pytables.py", line 1415, in _write_to_group
    s.write(obj=value, append=append, complib=complib, **kwargs)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/io/pytables.py", line 3022, in write
    blk.values, items=blk_items)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/pandas/io/pytables.py", line 2812, in write_array
    self._handle.create_array(self.group, key, value)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/tables/file.py", line 1168, in create_array
    track_times=track_times)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/tables/array.py", line 197, in __init__
    byteorder, _log, track_times)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/tables/leaf.py", line 290, in __init__
    super(Leaf, self).__init__(parentnode, name, _log)
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/tables/node.py", line 266, in __init__
    self._v_objectid = self._g_create()
  File "/root/miniconda3/envs/mimic_data_extraction/lib/python3.6/site-packages/tables/array.py", line 229, in _g_create
    nparr, self._v_new_title, self.atom)
  File "tables/hdf5extension.pyx", line 1297, in tables.hdf5extension.Array._create_array
tables.exceptions.HDF5ExtError: Problems creating the Array.
Job 'python mimic_direct_extract.py ...' terminated by signal SIGSEGV (Address boundary error)

Does MIMIC-Extract start from ICU or hospital admission?

Hi,

I ran MIMIC-Extract and extracted each patient's first 24 hours of data. However, I wonder whether these 24 hours are the first 24 hours in the ICU or of the hospital stay. Should I check the ICUSTAY intime or the HADM ADMITTIME when extracting other patient information?
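
For what it's worth, as I read the paper, the hourly grid is aligned to the ICU admission, which would make the window the first 24 hours from the ICUSTAYS intime. A hedged pandas sketch of that alignment (column names illustrative):

    import numpy as np
    import pandas as pd

    def hours_since_icu_admission(charttime: pd.Series, intime: pd.Series) -> pd.Series:
        """Bucket each measurement into whole hours since ICU intime."""
        delta = (charttime - intime).dt.total_seconds() / 3600.0
        return np.floor(delta).astype(int)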
