
ECG Viewer - Opens and manipulates raw ECG data

Author: Dakota Williams, [email protected]

Table of Contents

  1. Setup
    1. Prerequisites
    2. Compilation
    3. Running
  2. Workflow
    1. Opening
    2. Filtering
      1. Detrending
      2. Denoising
    3. Marking Bad Leads
    4. Annotations
    5. Exporting
      1. Data
      2. Bad Leads
      3. Annotations
  3. Acknowledgements

1. Setup [top]

1.1. Prerequisites [top]

To run this application, a Java Runtime Environment (JRE) version 1.6 or higher is required. Thanks to the Java Virtual Machine, the application is platform-independent and does not depend on the client operating system. If plugin development is desired, a Java Development Kit (JDK) version 1.6 or higher is also needed; for more information about plugins, see section 2.1 of this document. If compilation of the source code is necessary, a JDK version 1.6 or higher and GNU make are also needed. If compilation is not necessary, skip section 1.2. A breakdown of requirements and dependencies is shown in the table below.

Prerequisite | General Use | Plugin Development | Application Development
JRE          | ✔️          | ✔️                 | ✔️
JDK          |             | ✔️                 | ✔️
make         |             |                    | ✔️

The latest JDK and JRE are available here.

Getting make:

  • Windows - Install either MinGW (alias mingw32-make to make) or Cygwin, and make sure that java and javac are in %PATH%.
  • OS X - Install Xcode, specifically its developer tools.
  • Linux - Lucky you! You have it already!

1.2. Compilation [top]

Compiling the source code is as simple as running make release. This creates a folder called ECGViewer in the source directory containing the jar and the folders containing the libraries and plugins. During active development, the makefile provides other targets:

  • No target or default: Just running make will compile all the source files into class files in the current directory.
  • run: This will run the compiled files with the classpath set to include necessary libraries.
  • debug: This will start jdb with the necessary classpath.
  • clean: This will delete the compiled class files from the source directory (note: not the plugin directory).
  • realclean: Does the same thing as clean as well as removing the ECGViewer directory entirely.

1.3. Running [top]

To run the program, either double click on ECGViewer/ECGViewer.jar to execute it, or, from command line, run the command java -jar ECGViewer/ECGViewer.jar.

2. Workflow [top]

This section goes through a sample workflow of processing a dataset. To begin, run the program with one of the prescribed methods in section 1.3. The program should look like this:

2.1. Opening [top]

There are two options for opening a file, opening the whole file and opening a subset of a file. To open a whole file, go to File->Open... which will present a dialog like this:

Opening a subset of a file is a bit different. Using File->Open Subset... will show a dialog like this. The two text boxes on the left side of the dialog specify the time into the dataset at which the subset begins and how long the subset is, respectively. Both of these times are measured in milliseconds.

Currently, the supported file types include .dat, .123, 64-lead .txt, and .csv (exported from this application) files. For a more in depth analysis of these files, see plugins/DATFile.java and plugins/_123File.java. More file types can be read in by creating plugins. For more information on creating plugins, see plugins/README.

After loading the file, the main window should display the leads as graphs.

2.2. Filtering [top]

Applying a filter can be done in two ways: to an individual lead, or to all the leads at once. Applying a filter to all the leads is done in the main window from the Filter All menu. Selecting a method from there will display a preview dialog with a single lead and parameter sliders for that filter. After clicking OK, the filter will be applied to all leads.

The other way to apply a filter is to one lead, individually. Clicking on a graph will produce a window as shown below.

Controls for navigating the graph:

  • Click-n-drag (upper-left to lower-right): Zoom in on area
  • Click-n-drag (lower-right to upper-left): Zoom out
  • Ctrl + click-n-drag: Pan

The lower text boxes allow for manual focus of the graph. Start offset sets where the left side of the graph is aligned. Length sets how much the x-axis contains.

The Filter menu on this lead's window has all of the filter options available to filter just this lead.

2.2.1. Detrending [top]

The main purpose of detrending is to remove baseline drift so the signal sits on a flat baseline. The detrending options can be found in the first half of the Filter menu, before the separator.

  • Detrend: Subtracts a polynomial fit; the dialog asks for the order of the polynomial.
  • Constant Offset: Shifts the entire signal by a constant value.
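As an illustration of what detrending does, a first-order (linear) case can be sketched as below. This is only a sketch, not the application's implementation, which fits a polynomial of user-chosen order:

```java
// Sketch of first-order detrending: fit y = a + b*t by least squares,
// then subtract the fitted line so the signal sits on a flat baseline.
public class Detrend {
    public static double[] linear(double[] t, double[] y) {
        int n = t.length;
        double st = 0, sy = 0, stt = 0, sty = 0;
        for (int i = 0; i < n; i++) {
            st += t[i];
            sy += y[i];
            stt += t[i] * t[i];
            sty += t[i] * y[i];
        }
        double b = (n * sty - st * sy) / (n * stt - st * st); // slope
        double a = (sy - b * st) / n;                         // intercept
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            out[i] = y[i] - (a + b * t[i]); // remove the trend line
        }
        return out;
    }
}
```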

2.2.2. Denoising [top]

Denoising a signal removes extraneous data and leaves the more important parts of the signal. The denoising options are found on the second half of the Filter menu, after the separator. Some solid options include:

  • Savitzky-Golay filter: Does a good job of smoothing the signal; however, the morphology may change.
  • FFT: Has a sweet spot for minimizing noise; however, if the cutoff is set too low, the filter may lose information.
  • Wavelet: Solid all-around choice.
  • Butterworth: Finicky, but works well.

2.3. Marking Bad Leads [top]

Picking out bad leads must be done individually to each lead. To do so, click on the lead's graph in the main window to open that lead's window. On that lead's menu, click Dataset->Bad Lead. Doing so will set the background of that lead to red, indicating it is bad.

There is an option to interpolate bad leads with their direct neighbors. In the settings panel (Main Window File->Settings), check Interpolate Bad Leads? to activate this functionality.
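The exact interpolation scheme is not documented here. As a rough sketch only, replacing each sample of a bad lead with the mean of the corresponding samples on its two direct neighbors would look like this (the neighbor pairing and averaging are assumptions, not the application's confirmed behavior):

```java
// Assumed scheme (illustration only): replace each sample of a bad lead
// with the mean of the same sample on its two direct neighbor leads.
public class Interpolate {
    public static double[] fromNeighbors(double[] left, double[] right) {
        double[] out = new double[left.length];
        for (int i = 0; i < left.length; i++) {
            out[i] = (left[i] + right[i]) / 2.0;
        }
        return out;
    }
}
```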

2.4. Annotations [top]

Annotations are used to mark places of interest in the signal. Four designated titles currently exist:

  1. P-wave,
  2. QRS-complex,
  3. R-wave,
  4. T-wave

They represent the major features in a single beat. Annotations are placed on an individual lead, but apply to all leads. For example, if an annotation is set on lead a, then on lead b, the same annotations appear.

Placing an annotation is a two-step procedure. First, make sure Annotation->Place Annotations is checked. This disables the graph navigation function and enables placing annotations. Second, selecting the correct type of annotation from the Annotation menu and clicking on the graph where the annotation should be placed will add a new annotation. To enable graph navigation, uncheck Annotation->Place Annotations.

There is a feature to auto annotate the R-waves (highest peaks) in the signal. To use this, click Annotation->Find R-Waves.

To clear all annotations, use Annotation->Clear.

Annotation colors can be customized from the settings panel.

2.5. Exporting [top]

Exporting data allows it to be used in other programs and routines not a part of this application. All export methods can be found in the File menu of the main window.

2.5.1. Data [top]

Like opening data, exporting data can be done two ways: all at once, or just a subset. To export all of the data, use File->Export.... To export just a subset of the data, use File->Export Subset.... The subset bounds are defined by the two text boxes at the bottom of the main window. In both of these methods, the file format must be chosen from the drop-down in the export dialog.

Currently, there is one working way of exporting data. The data is exported into a file of comma separated values (CSV), where the first row is the lead numbers, and each subsequent row is the time followed by the value of each lead at that time. Even though the drop-down provides a MATLAB matrix option, do not use it; it does not work.
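The exported CSV can be consumed by other programs. A minimal sketch of a parser for the layout described above (assuming exactly that layout, with no quoting) might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reader for the exported CSV layout: row 1 holds the lead
// numbers, and each later row is the time followed by one value per lead.
public class ExportedCsv {
    public static List<double[]> parseRows(String csv) {
        List<double[]> rows = new ArrayList<>();
        String[] lines = csv.trim().split("\n");
        for (int i = 1; i < lines.length; i++) { // skip the lead-number header
            String[] cells = lines[i].split(",");
            double[] row = new double[cells.length];
            for (int j = 0; j < cells.length; j++) {
                row[j] = Double.parseDouble(cells[j].trim());
            }
            rows.add(row); // row[0] is the time, row[1..] are lead values
        }
        return rows;
    }
}
```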

2.5.2. Bad Leads [top]

Exporting bad leads is done by File->Export Bad Leads.... The file output is all of the lead numbers of the bad leads, one per line.

2.5.3. Annotations [top]

Exporting annotations is invoked by File->Export Annotations.... Each annotation type is associated with a number in the order it appears in the menu (P-wave -> 0, QRS-complex -> 1, etc.). These numbers are stored with their temporal position in the signal, one annotation per line.

For example, suppose an R-wave annotation (3rd in the menu -> 2) was placed at time 1000. When the annotations are exported, the file would contain a line that looks like 2.0 1000.0.
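A sketch of reading these exported lines back in another program (the four type names come from section 2.4; the parsing details are otherwise an assumption):

```java
// Hypothetical parser for exported annotation lines of the form
// "<type index> <time>", e.g. "2.0 1000.0" for an R-wave at t = 1000.
public class AnnotationLine {
    static final String[] TYPES = {"P-wave", "QRS-complex", "R-wave", "T-wave"};

    public static String typeOf(String line) {
        String[] parts = line.trim().split("\\s+");
        int index = (int) Double.parseDouble(parts[0]); // menu order -> type
        return TYPES[index];
    }

    public static double timeOf(String line) {
        return Double.parseDouble(line.trim().split("\\s+")[1]);
    }
}
```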

3. Acknowledgements [top]

This work is supported by National Science Foundation CAREER Award #ACI-1350374.

Libraries Used

  1. JFreeChart v1.0.19 - unmodified
  2. Apache Commons Math v3.3 - unmodified
  3. savitzky-golay-filter - modified: SGFilter.java was updated to resolve deprecated dependencies on Apache Commons Math. A copy of the Apache license is contained in the sgfilter directory.
  4. JWave - unmodified
  5. Java Look and Feel Graphics Repository - unmodified

ecg-viewer's People

Contributors

benwrk, mje10, raineforest


ecg-viewer's Issues

Subsets of large files still take as much memory as large files

See #5 for more background on the memory issues related to this application.

Problem: Although optimizing the program to handle files several orders of magnitude larger than before is a hefty challenge, the program already contains some implementations for reading in smaller subsets of files for graphing. These should take only the necessary memory, but as of now, they still require storing the entire data file in memory.

The following is the chain of methods called (and the subset methods implemented) when the user loads a subset of a file:

  • public MainFrame(final ECGViewHandler views) { called when the app starts, adds action listener }
  • public void actionPerformed(ActionEvent e) { listens for the user pressing the "Load Subset" menu option; after they make their selection, calls loadFileSubset on the ECGViewHandler, includes start and end time }
  • public void ECGViewHandler.loadFileSubset(String file, double start, double end) { calls readSubsetData on its ECGModel, includes filename, start, and end }
  • public void ECGModel.readSubsetData(String filename, double start, double end) { calls own readData with filename, start, and length to read, THEN calls ECGDataSet.subset on each item (channel?) in its points with start and end }
    • public void ECGModel.readData(String filename, double start, double length) { creates file reader, calls read on it with start, length, and an ArrayList where read will put the values; filters out bad channels and copies the remaining data }
    • public ECGDataSet subset(double start, double end) { makes a copy of the data that includes only the necessary points by checking the first value in each double[], which represents the time }
  • public int ECGFile.read(String fileName, double start, double length, ArrayList<AbstractMap.SimpleEntry<Double, ArrayList>> points) { reads in the entire file; each record, each tuple, each value; and stores it into points }

Possible solution: Since none of these ArrayLists rely on ordering, and instead use the timestamp as the key, the DATFile plugin and the others could simply skip over records whose timestamps do not fall within the requested window.
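That proposed skip could be sketched roughly as follows. The whitespace-separated record format here is a placeholder assumption; each plugin would apply the same guard to its own format, producing the same (time, values) entries that ECGFile.read stores into points:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed fix: skip records whose timestamp falls outside
// [start, start + length) instead of buffering the entire file's data.
public class SubsetReader {
    public static ArrayList<AbstractMap.SimpleEntry<Double, ArrayList<Double>>>
            readSubset(List<String> records, double start, double length) {
        ArrayList<AbstractMap.SimpleEntry<Double, ArrayList<Double>>> points =
                new ArrayList<>();
        for (String record : records) {
            // Placeholder record format: "time v1 v2 ...", whitespace-separated.
            String[] cells = record.trim().split("\\s+");
            double time = Double.parseDouble(cells[0]);
            if (time < start) continue;          // before the window: skip
            if (time >= start + length) break;   // past the window: stop early
            ArrayList<Double> values = new ArrayList<>();
            for (int i = 1; i < cells.length; i++) {
                values.add(Double.parseDouble(cells[i]));
            }
            points.add(new AbstractMap.SimpleEntry<>(time, values));
        }
        return points;
    }
}
```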

Unable to open large files: Memory Inefficiency/Specification Change

Background: The program was initially intended to deal with small ECG data files, on the order of 10-100 MB. These files work as intended. However, now it is being asked to process much larger files, on the order of 1-2 GB.

Problem: The program's current implementation utilizes at least as much RAM as the file size in order to perfectly graph the file, and likely uses several times as much. Therefore, the JVM runs out of memory before it finishes reading in the file. It will then fail silently in order to allow users to try a different operation.

Possible Solutions:

  1. Increase the amount of RAM available to the JVM. By adding the -Xmx argument (for example, java -Xmx4g -jar ECGViewer/ECGViewer.jar allows up to 4 GB of heap), you can allow the JVM to use more heap space than is usually allocated. Although it would certainly allow larger files, there is no great way to determine exactly how effective this method is. If you believe your files are right on the edge of what's possible, this might be worth a shot.

  2. Process data channel by channel. This is one way to split up the work so that not all data would be loaded at once, but each image could still be formed from a single array using the graphing software. However, the downsides are that in order to apply the operations that the program offers, we would need to reload all of the data instead of working from live memory.

  3. Decrease graphing precision. On large files, it is unlikely that researchers are looking for a point-to-point accurate graph on the timescales that these large files record. Therefore, significant improvements could be made by quantizing the data into a smaller number of chunks that could each be graphed with one point but cover a large number of data points in the original set.

  4. Preprocessing. By creating smaller files with parts of the data, the program could divide its work into sections and do one section at a time. Like option 2, this could be slower to load.

  5. Refactor internal data structures. The program was not originally written to handle these large files efficiently, and changes could be made to improve how the data is manipulated.
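Solution 3 above could be sketched as chunked downsampling; the choice of the chunk mean as the representative point is an illustrative assumption (min/max pairs or medians would also work):

```java
// Sketch of solution 3: collapse each fixed-size chunk of samples into a
// single point (here, the chunk mean) so the graph needs far fewer values.
public class Downsample {
    public static double[] byChunkMean(double[] samples, int chunkSize) {
        int chunks = (samples.length + chunkSize - 1) / chunkSize; // ceil div
        double[] out = new double[chunks];
        for (int c = 0; c < chunks; c++) {
            int from = c * chunkSize;
            int to = Math.min(from + chunkSize, samples.length);
            double sum = 0;
            for (int i = from; i < to; i++) {
                sum += samples[i];
            }
            out[c] = sum / (to - from); // one point represents the whole chunk
        }
        return out;
    }
}
```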

At this time, there is no plan to implement any of the solutions mentioned here. Focus will be kept on making sure you can still open small parts of large files. Although most of the discussion concerning this issue will take place internally in CBLRIT, open-source comments and contributions are welcome.
