
dcan-labs / abcd-hcp-pipeline

BIDS application for processing functional MRI data, robust to scanner, acquisition, and age variability.

Home Page: https://hub.docker.com/r/dcanumn/abcd-hcp-pipeline

License: BSD 3-Clause "New" or "Revised" License

Shell 2.77% Python 95.49% Dockerfile 1.74%
bids-format fmri fmri-preprocessing image-processing mri-images pipeline

abcd-hcp-pipeline's People

Contributors

arueter1, audreymhoughton, barrytik, dasturge, dependabot[bot], ericearl, iamdamion, kathy-snider, lucimoore, madisoth, perronea, rosemccollum, tikal004


abcd-hcp-pipeline's Issues

Pipeline tries to apply DWI fieldmaps to functional runs

By default, the pipeline tries to apply DWI fieldmaps to functional data, causing FMRIVolume to throw dimension mismatch errors. I tried adding IntendedFor fields to the JSON sidecars for the functional fieldmaps, but this didn't fix the problem.

For reference, deleting/moving the DWI fieldmaps from the rawdata/fmap directory seems to avoid this error (though ideally this workaround would not be necessary).

This issue was identified using HBN data from public release 7.
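The IntendedFor workaround mentioned above can be sketched as a small script. This is illustrative only (the function name and file paths are hypothetical); per the BIDS spec, IntendedFor paths are given relative to the subject directory.

```python
import json
from pathlib import Path

def set_intended_for(fmap_json, func_paths):
    """Write an IntendedFor list into a fieldmap JSON sidecar.

    Illustrative sketch: points a functional fieldmap at its BOLD runs
    so a DWI fieldmap is never the only candidate match. Paths in
    func_paths must be relative to the subject directory per the BIDS
    spec (e.g. "ses-01/func/..._bold.nii.gz").
    """
    sidecar = Path(fmap_json)
    meta = json.loads(sidecar.read_text())
    meta["IntendedFor"] = list(func_paths)
    sidecar.write_text(json.dumps(meta, indent=4))
```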

Error in DCANBOLDProcessing

Dear DCAN-staff,
unfortunately I ran into an error at the BOLD processing stage of the pipeline. From here I can't tell what this is about, so I just copied the content of DCANBOLDProcessing_teardown.err:

Subscript indices must either be real positive integers or logicals.

Error in sorter (line 11)

Error in analyses_v2 (line 27)

MATLAB:badsubscript
(this error block repeats two more times)

It's actually not happening to only a subset of the processed subjects, but to all of them, so it seems something general isn't right here.
Could you please have a look on your side at what's happening at those lines? Let me know if you need any additional information.

Thank you very much in advance and all the best,
Martin

pipeline operation requires 'session' to be defined

I was trying to run the pipeline on a 'valid' BIDS case that did not have session explicitly set (created using dcm2bids w/o the "-s" option).

I was getting the following error:
Traceback (most recent call last):
  File "/app/run.py", line 309, in
    _cli()
  File "/app/run.py", line 65, in _cli
    return interface(**kwargs)
  File "/app/run.py", line 221, in interface
    for session in session_generator:
  File "/app/helpers.py", line 64, in read_bids_dataset
    anat = set_anatomicals(layout, subject, sessions)
  File "/app/helpers.py", line 90, in set_anatomicals
    t1w_metadata = layout.get_metadata(t1ws[0].filename)
IndexError: list index out of range

A little detective work determined that the pipeline wanted a session (specifically, it seemed to be looking for 'baselineYear1Arm1'). So I recreated the BIDS dataset with a named session (using dcm2bids with the "-s baselineYear1Arm1" option), and things seemed to work.

In summary, it might be good to indicate in the documentation that session needs to be explicitly named in the input BIDS data.

version 0.1.0 question

Dear ABCD developers,
I am testing the newest version of abcd using the session flag. Two questions:

  1. If there are two sessions is the first one processed before the second by default?
  2. Is there anywhere in the logs or the summary that confirms the pipeline version?

Juan

Change respiration filter type with docker/singularity entry point?

Is there any existing option (using the docker/singularity image) that I am missing to change the respiration filter from notch to low pass? I see that in the DCAN bold processing script this is an argument (--motion-filter-type), but it doesn't seem accessible when using the docker/singularity image. Am I missing something?

We are trying to avoid dependency issues and would prefer to use the singularity rather than the python/matlab scripts in the dcan_bold_processing repository, if possible. I just wanted to clarify if this is possible as is, or if the docker entry script would have to be updated. Thanks.

Recommendation for QC of distortion correction?

Hello
I was just wondering if you had or could recommend a method for checking the quality/success of the BOLD distortion correction step? This does not appear in the executive summary QC as it is for fMRIPrep - are there other files or outputs that would be useful to look at?
Thanks!

TypeError in reading self.bids_data['func_metadata']['PhaseEncodingDirection']

The error is:

Traceback (most recent call last):
  File "/app/run.py", line 305, in <module>
    _cli()
  File "/app/run.py", line 65, in _cli
    return interface(**kwargs)
  File "/app/run.py", line 232, in interface
    session_spec = ParameterSettings(session, out_dir)
  File "/app/pipelines.py", line 133, in __init__
    self.bids_data['func_metadata']['PhaseEncodingDirection'])
TypeError: list indices must be integers or slices, not str

The "PhaseEncodingDirection": in each case in each func sidecar JSON is "j".

I'm guessing from this the error is here:

https://github.com/DCAN-Labs/abcd-hcp-pipeline/blob/develop/app/helpers.py#L245

Query - enback eprime files

Hi,
I'm returning to abcd-hcp-pipeline after a long break, so I'm probably out of date. I have new data acquired using protocols derived from ABCD. The emotional nback used, as far as I know, the eprime implementation used in the ABCD study conducted at the same site. Are there established procedures for importing eprime to bids and processing with the pipeline?

Thanks

Add a visual walkthrough to the documentation

Proposal to make a visual walkthrough, showing inputs, input folder structure, commands to construct each of these, pulling and executing the image, and how to access visuals of the major output files and qc images. This could ideally be included in readthedocs.

"--ignore func" requires a func folder still

A subject that looks like this:

sub-ANATONLY/
|__ ses-baselineYear1Arm1
    |__ anat
        |__ sub-ANATONLY_ses-baselineYear1Arm1_rec-normalized_T1w.json
        |__ sub-ANATONLY_ses-baselineYear1Arm1_rec-normalized_T1w.nii.gz
        |__ sub-ANATONLY_ses-baselineYear1Arm1_rec-normalized_T2w.json
        |__ sub-ANATONLY_ses-baselineYear1Arm1_rec-normalized_T2w.nii.gz

Is not accepted with the --ignore func option in place because of this error:

user@server:/projects/ABCD/docker_pull_test$ docker run --rm -v /projects/ABCD/docker_pull_test/input:/input:ro -v /projects/ABCD/docker_pull_test/output:/output -v /software/freesurfer/LICENSE:/license:ro -v /projects/ABCD/docker_pull_test/ABCD_BIDS_cleaning.json:/cleaning.json:ro dcanlabs/abcd-hcp-pipeline /input /output --participant-label ANATONLY --freesurfer-license /license --ncpus 6 --custom-clean /cleaning.json --print-commands-only --ignore func
/usr/local/lib/python3.6/dist-packages/bids/layout/bids_layout.py:116: UserWarning: 'dataset_description.json' file is missing from project root. You may want to set the root path to a valid BIDS project.
  warnings.warn("'dataset_description.json' file is missing from "
Traceback (most recent call last):
  File "/app/run.py", line 304, in <module>
    _cli()
  File "/app/run.py", line 65, in _cli
    return interface(**kwargs)
  File "/app/run.py", line 216, in interface
    for session in session_generator:
  File "/app/helpers.py", line 65, in read_bids_dataset
    func = set_functionals(layout, subject, sessions)
  File "/app/helpers.py", line 119, in set_functionals
    'func_metadata': func_metadata[0]
IndexError: list index out of range

However, the exact same arguments given to a subject with both anat and func folders under their BIDS input folder do not cause this problem.

Also, it does not ignore additional functional arguments like --bandstop or --abcd-task.

Access rights from Docker output

Thanks for providing this great pipeline; it works fine for me!

One comment, as it drove me nuts the first time: accessing the Docker output on Linux machines. It turns out that within Docker the pipeline runs as root, but you might no longer be root when you access your data.

To get the correct permissions, add the flag --user "$(id -u):$(id -g)" to the docker options,

so it's:

docker run --rm --user "$(id -u):$(id -g)" \
-v /path/to/bids_dataset:/bids_input:ro \
-v /path/to/outputs:/output \
-v /path/to/freesurfer/license:/license \
dcanlabs/nhp-abcd-bids-pipeline /bids_input /output \
--freesurfer-license=/license [OPTIONS]

as a minimal command example.

Best,
Martin

Time-series ready for correlation?

We are interested in calculating a dense connectome similar to the HCP. I know those files are quite large, so I imagine that's why they were not shared, but I was wondering if I can calculate it directly from the following file type:
sub-#/ses-#/func/sub-#_ses-#_task-(MID|nback|SST|rest)_bold_desc-filtered_timeseries.dtseries.nii

That is, have these been sufficiently processed to the point that they are ready for correlation? Do I need to incorporate motion censoring using the associated files? I don't recall the HCP stance on that.

Exact Container Call and Version for ABCC BIDS Derivatives

I'm sorry I know I asked this a while ago, but I think I forgot to save it. What was the exact call and container version for the abcd-hcp-pipeline when processing the ABCC BIDS derivatives? A colleague is looking to replicate results.

Additionally, if you have the configuration file used for DCANBoldPreProc that would be helpful!

FreeSurfer version

Is it possible to run the abcd-hcp-pipeline using the newest version of FreeSurfer?

[3165] - DCAN Labs ABCD-BIDS Community Collection (ABCC) File Availability

Hello,

I am looking to download DCAN-processed data from the NDA repository. I was wondering if these files are available for the individual subjects for download:

T1: sub/ses/files/MNINonLinear/T1w_restore_brain
BOLD Image: sub/ses/files/task-rest01/task-rest01_nonlin_norm.nii
Motion Regressor: sub/ses/files/MNINonLinear/Results/task-rest01/Movement_Regressors.txt
Another File: MNINonLinear/Results/task-rest01/task-rest01.nii.gz

Thank you very much!

How should multiple sessions be handled?

Question: I have been testing the pipeline using a single subject and single session. However, our collections are longitudinal and we preprocess as scans come in. I see that there's an argument to process all sessions for a subject as one. However, the help doesn't show an argument to specify a single session and it's unclear how multiple sessions are handled if they are slowly being added.

If one session is processed and then a new session is BIDS sorted and added, if the pipeline is run again for that subject, will it check for outputs, skip the already processed session, and only process the new session? Or will it try to reprocess all ses- folders under that sub- folder?

If the latter, how do you suggest we handle our workflow using the DCAN pipeline (processing only a single session)? Thanks.

Adding session flag

Hello, for longitudinal study designs it would be helpful if the abcd-hcp-pipeline allowed the user to specify a participant ("sub-") and session ("ses-") label.

Fails without fieldmaps when DWI files are present

I am able to process data that doesn't have fieldmaps if no DWI data is present. If DWI data is present and there are no fieldmaps, it fails looking for fmap metadata.

Note: --ignore dwi did not solve this problem

Fails for anat only data when --ignore func flag isn't specified

If you have data that contains some subjects with functional data and some without, you cannot submit one run command for the pipeline. You have to submit one command for the subjects with functional data and another, including the --ignore func flag, for the subjects without.

If the pipeline could check what is in the subject directory and default to --ignore func for subjects without functional data, that would be ideal.
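The check proposed above could be sketched as follows. This is a hypothetical helper, not the pipeline's code: it assumes the BIDS convention that functional data lives under ses-*/func.

```python
from pathlib import Path

def ignore_flags_for(subject_dir):
    """Suggest --ignore flags based on what a subject directory contains.

    Hypothetical helper sketching the issue's proposal: if no session
    under the subject has a func/ directory, default to --ignore func
    so anat-only subjects can run in the same batch as the others.
    """
    has_func = any(Path(subject_dir).glob("ses-*/func"))
    return [] if has_func else ["--ignore", "func"]
```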

Pipeline located at /pipeline instead of /opt/pipeline

user@server:/projects/ABCD/docker_pull_test$ docker run --rm -it --entrypoint bash dcanlabs/abcd-hcp-pipeline
root@dockerinstance:/# ls -d /opt/pipeline /pipeline
ls: cannot access '/opt/pipeline': No such file or directory
/pipeline

The app seems to expect it at /opt/pipeline:

user@server:/projects/ABCD/docker_pull_test$ docker run ...

...

running PreFreeSurfer
/opt/pipeline/PreFreeSurfer/PreFreeSurferPipeline.sh \

...

Traceback (most recent call last):
  File "/app/run.py", line 304, in <module>
    _cli()
  File "/app/run.py", line 65, in _cli
    return interface(**kwargs)
  File "/app/run.py", line 300, in interface
    stage.run(ncpus)
  File "/app/pipelines.py", line 566, in run
    result = self.call(cmd, out_log, err_log, num_threads=ncpus)
  File "/app/pipelines.py", line 574, in call
    return _call(*args, **kwargs)
  File "/app/pipelines.py", line 990, in _call
    result = subprocess.call(cmd.split(), stdout=out, stderr=err, env=env)
  File "/usr/lib/python3.6/subprocess.py", line 267, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/usr/lib/python3.6/subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.6/subprocess.py", line 1344, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/opt/pipeline/PreFreeSurfer/PreFreeSurferPipeline.sh': '/opt/pipeline/PreFreeSurfer/PreFreeSurferPipeline.sh'

Singularity image

Hello,
In the README, you mention that you can send a singularity image if asked. We need the singularity recipe for a client at ICS Penn State. Would it be possible for you to help us with that?
Thank you.

Refactor Dockerfile to be consistent w/ nhp-abcd-bids-pipeline, infant-abcd-bids-pipeline

After merge of #39 , we should be able to refactor the Dockerfile and use the same "layered" build approach as nhp-abcd-bids-pipeline and infant-abcd-bids-pipeline:

top layer: DCAN-HCP + BIDS wrapper + python
mid layer: DCAN internal-tools (dcan_bold_proc, executivesummary, customclean etc.)
bottom layer: external-software (Ubuntu, FSL, FreeSurfer, Workbench etc.)

Standardizing both the Docker build process and the versions of dependencies shared across pipelines will be a major plus for usability and maintainability.

Allow for lack of ImageOrientationPatientDICOM

ImageOrientationPatientDICOM and InPlanePhaseEncodingDirectionDICOM are used in conjunction to get the readout direction of a subject.

These are probably exclusively put in by dcm2bids, and we should allow for two cases:

one, change the logic in the pipeline so these fields are only used when a fieldmap is present.
two, allow for an explicit readout direction input, as in the original HCP bids application.

Error Reporting: No BIDS input

The pipeline throws an unhelpful python index error when the input BIDS dataset is not found, we should develop good error reporting for basic possible missing files from the input structure, or when no input BIDS is identified at all.

No module named numpy

Error in PreFreeSurfer:

user@server:/projects/ABCD/docker_pull_test$ tail output/sub-ANATFUNC/ses-baselineYear1Arm1/logs/PreFreeSurfer/PreFreeSurfer.out
Mon Feb 25 00:18:48 UTC 2019 - PreFreeSurferPipeline.sh - NOT PERFORMING GRADIENT DISTORTION CORRECTION
Mon Feb 25 01:46:27 UTC 2019 - PreFreeSurferPipeline.sh - Not Averaging T1w Images
Mon Feb 25 01:46:27 UTC 2019 - PreFreeSurferPipeline.sh - ONLY ONE AVERAGE FOUND: COPYING
Mon Feb 25 01:48:36 UTC 2019 - PreFreeSurferPipeline.sh - Aligning T1w image to 0.7mm MNI T1wTemplate to create native volume space
Mon Feb 25 01:48:36 UTC 2019 - PreFreeSurferPipeline.sh - mkdir -p /output/sub-ANATFUNC/ses-baselineYear1Arm1/files/T1w/ACPCAlignment

 START: ACPCAlignment
Final FOV is:
0.000000 176.000000 0.000000 256.000000 69.000000 150.000000

user@server:/projects/ABCD/docker_pull_test$ cat output/sub-ANATFUNC/ses-baselineYear1Arm1/logs/PreFreeSurfer/PreFreeSurfer.err
Traceback (most recent call last):
  File "/opt/fsl/bin/aff2rigid", line 76, in <module>
    from numpy import *
ImportError: No module named numpy

freesurfer output

Is it possible to make the direct FreeSurfer output available for the ABCD data? I've checked out the filemapper and it looks like those files are just skipped over (presumably because they are intermediate files for the final outputs). Is space an issue for that?

Error in executive summary

Hi,
I have the following error:

logs/ExecutiveSummary/ExecutiveSummary.err


Traceback (most recent call last):
  File "/opt/dcan-tools/executivesummary/summary_tools/layout_builder.py", line 1519, in <module>
    main()
  File "/opt/dcan-tools/executivesummary/summary_tools/layout_builder.py", line 1454, in main
    series_panel = write_series_panel_row(series_rows[:6])
  File "/opt/dcan-tools/executivesummary/summary_tools/layout_builder.py", line 728, in write_series_panel_row
    'nonlin_norm'  : list_of_img_paths[5]}
IndexError: list index out of range
An ERROR occurred in line 355 of file /opt/dcan-tools/executivesummary/summary_tools/executivesummary_wrapper.sh.

Tail of the .out:

./img/DVARS_and_FD_task-rest01.png
Making series panel for task-rest01
./img/DVARS_and_FD_task-enback101.png
Writing series panel but series type is unknown; program exiting...
./img/postreg_DVARS_and_FD_task-enback101.png
Writing series panel but series type is unknown; program exiting...
./img/rpb21jdr00011_task-enback101_in_t1.gif
Writing series panel but series type is unknown; program exiting...
./img/rpb21jdr00011_t1_in_task-enback101.gif
Writing series panel but series type is unknown; program exiting...
./img/task-enback101_sbref.png
Writing series panel but series type is unknown; program exiting...
Making series panel for Unknown

Error in Processing

The error I got while processing my data (in BIDS format) @ericearl:

Traceback (most recent call last):
  File "/app/run.py", line 364, in
    _cli()
  File "/app/run.py", line 67, in _cli
    return interface(**kwargs)
  File "/app/run.py", line 263, in interface
    session_spec = ParameterSettings(session, out_dir)
  File "/app/pipelines.py", line 106, in init
    self.bids_data['t1w_metadata'])
  File "/app/helpers.py", line 195, in get_realdwelltime
    pBW = metadata['PixelBandwidth']
KeyError: 'PixelBandwidth'

This is the error message I got. I think there could be an issue with my input (session not properly assigned in the BIDS data format), but it is the pBW or PixelBandwidth error that I am unable to understand or fix. Any help or suggestion would be appreciated.
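The traceback shows get_realdwelltime assuming the PixelBandwidth key exists in the sidecar, which raises KeyError on datasets whose converter omitted it. A defensive lookup (illustrative sketch, not the pipeline's code) would let the app report a useful message instead:

```python
import json

def get_pixel_bandwidth(sidecar_path):
    """Return PixelBandwidth from a BIDS JSON sidecar, or None if absent.

    Illustrative sketch: a .get() lookup instead of direct indexing, so
    a missing key can be reported to the user rather than crashing with
    a bare KeyError.
    """
    with open(sidecar_path) as f:
        metadata = json.load(f)
    return metadata.get("PixelBandwidth")
```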

Dockerfile update

Artful Aardvark is no longer supported, so we probably want to swap over to the newer Ubuntu 18.04 LTS.

Here are some of the changes we needed to make to base libraries to handle everything:

  • install perl 5.20.3
  • install libnetcf
  • fix libz so
  • fix libstdc++6 so

Perl has to do with FreeSurfer issues, but I don't recall where the other libraries were causing hiccups.

handle more possible DFM configurations

Currently the develop branch only supports a pair of magnitude images and a phasediff image. The only metadata it needs is the difference in echo times for the two images. That metadata should ideally come from any possible source.
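The one piece of metadata described above can be sketched like this. Per the BIDS spec, a phasediff sidecar carries EchoTime1 and EchoTime2; the function name and error handling here are illustrative.

```python
import json

def phasediff_delta_te(phasediff_json):
    """Echo-time difference (seconds) for a phasediff fieldmap.

    Illustrative sketch: reads EchoTime1 and EchoTime2 (BIDS-spec field
    names for phasediff fieldmaps) and returns their difference, the
    only metadata the develop branch needs for this fieldmap type.
    """
    with open(phasediff_json) as f:
        meta = json.load(f)
    try:
        return meta["EchoTime2"] - meta["EchoTime1"]
    except KeyError as missing:
        raise ValueError(f"phasediff sidecar is missing {missing}")
```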

Files not available for download

Sorry if this is the wrong place for this comment. We are downloading files from Collection 3165 - DCAN Labs ABCD-BIDS, and we have a lot of instances of files that do not download (for example, event timing files, or ABCD-HCP processed MNI BOLD images); we also have a lot of participants that download completely as expected. I don't think it's an issue with the downloader, and I imagine the participants made available in the collection passed the expected-output stage. Maybe some links in the manifest are incorrect? I have a text file for each file type with the failed subjects if it would help give some examples of the problem.

processing with phase/magnitude fmaps only

Errors I'm getting when I'm trying to run the pipeline with only 'magnitude1', 'magnitude2', 'phase1', and 'phase2' fieldmaps (I do not have epi fieldmaps I can use):

Traceback (most recent call last):
  File "/app/run.py", line 364, in <module>
    _cli()
  File "/app/run.py", line 67, in _cli
    return interface(**kwargs)
  File "/app/run.py", line 263, in interface
    session_spec = ParameterSettings(session, out_dir)
  File "/app/pipelines.py", line 147, in __init__
    types = self.bids_data['fmap'].keys()
AttributeError: 'list' object has no attribute 'keys'

These notes from debugging might help (ignore if not helpful):

helpers.py line 189: cases need to be added for magnitude/phase fieldmaps here

bug: BIDS App expects AcquisitionMatrixPE metadata in get_realdwelltime function, even in the absence of fieldmaps

A bug was found that is preventing the BIDS app from running on the ABIDE dataset. The information in the JSON files for the ABIDE data is different from that of the ABCD dataset. The ABIDE JSON files are organized using "AcquisitionMatrix", whereas the ABCD JSON files use "AcquisitionMatrixPE".

Example for ABIDE:
"AcquisitionMatrix": "80x64",

Example for ABCD:
"AcquisitionMatrixPE": 90,

Due to this difference, the BIDS app cannot be run on the ABIDE dataset. The ABIDE JSON files may need to be converted, or the script can be adjusted to compensate for the difference between the JSON files.
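The suggested script adjustment could look like this sketch (a hypothetical helper, not the pipeline's code). Treating the second field of "80x64" as the phase-encode dimension is an assumption that would need checking against the scanner protocol.

```python
def acquisition_matrix_pe(metadata):
    """Phase-encode matrix size from either JSON convention.

    ABCD-style sidecars carry "AcquisitionMatrixPE": 90, while
    ABIDE-style sidecars carry "AcquisitionMatrix": "80x64". This
    hypothetical helper accepts both; assuming the second field of
    "80x64" is the PE dimension is an unchecked assumption.
    """
    if "AcquisitionMatrixPE" in metadata:
        return int(metadata["AcquisitionMatrixPE"])
    _freq, _, pe = metadata["AcquisitionMatrix"].partition("x")
    return int(pe)
```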

Changes to README "Options" section (from review 2022-01-12)

Options section / --help message

  • Add a brief explanation of each stage listed in the --stage flag description
  • Make --version number reporting a part of the default output in the log files?
  • Specify that --check-outputs-only only checks for the outputs listed in app/pipeline_expected_outputs.json (and link that file)
    • For example, that file in this repo currently excludes T2s, so --check-outputs-only would not check for T2s

DCANBOLD processing displays task-res instead of task-rest in output folder

I ran the ABIDE BIDS data through the BIDS app and noticed errors in the functional data.

Concatenating task-rest1 to ../../ses-None/files/MNINonLinear/Results/task-res_DCANBOLDProc_v4.0.0_Atlas.dtseries.nii

BIDS_ABIDEI/NYU/sub-*/func/
├── sub-*_task-rest_bold.json
└── sub-*_task-rest_run-1_bold.nii.gz

Updates to README Additional Information (from review 2022-01-12)

Outputs subsection

  • Update the output directory tree
    • E.g. executive_summary is now in a different place, named differently, has different contents, etc.
    • Include T2 outputs in the tree

Rerunning subsection

  • Either move the one sentence to somewhere more relevant or add an example of its usefulness

Special Pipelines subsection

  • Clarify that this means special options, not a special different pipelines
  • Remove the point about the abcd-task module and replace with a reference/link to Greg's and Anthony's abcd-bids-tfmri-pipeline

Misc subsection

  • Mention that the pipeline does not(?) currently run with gradient field maps
  • Add link to BIDS Spec for PEPolar field maps
  • Clarify whether PEPolar field maps are the only BIDS-valid kind recognized by the pipeline?

Some Current Limitations subsection

"We're gonna punt on everything that was described in these first 3 paragraphs" -Thomas

  • Replace part about diffusion field maps and DWI processing with a link to QSIPrep
  • Change "automatic reading of physio data from BIDS format has not been implemented" to "will never be implemented"?
    • Add a link to movement regressors power plots and DCANBoldProc so readers know where to get that functionality?
  • Delete the word "currently" from "Software does not currently support" in last paragraph, and also remove the last sentence (which is confusing and may not even be true)

Add to beginning of README (from review 2022-01-12)

  • Add link to DCAN Labs' modified HCP Pipeline in the first sentence of the README
  • Add instructions for running the Python code directly without Singularity or Docker, or warnings that it may not work correctly?
  • FreeSurfer clarification
    • Add 1 sentence saying that you do not need to download a copy of FreeSurfer (as long as you run a containerized version)
    • Remove the phrase "via a container"
  • Add warning that the pipeline does not recognize every possible BIDS configuration, with examples of
    1. Labels that the pipeline will just ignore, and
    2. Labels that will break the pipeline (e.g. using the acq label to distinguish different resolutions in the same func folder)?

Singularity incompatibility

Singularity containers cannot have writable contents. The freesurfer license file will need to be accepted in another manner, by setting the FS_LICENSE environment variable.

multiple sessions

I am trying to preprocess a single subject with two sessions. The first session has anat, func, and fmaps; the second has func and fmaps. The fmaps and func at ses-1 and ses-2 are the exact same series. I have been unable to run the abcd-hcp preprocessing pipeline on both sessions. I think it has to do with the syntax of the func names, but am unsure. I tried --all-sessions and it did not work either. Does anyone have any experience with this issue?
Thanks

accept physio data in bids format

BIDS-format physio data is very simple to read; we just need some logic for calculating proper bandstop parameters from a physio file.
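The bandstop logic asked for above could be sketched roughly as follows. This is an illustrative sketch only: the function name, the plausible-breathing frequency range, and the +/- 2 bpm notch width are assumptions, not the pipeline's defaults.

```python
import numpy as np

def respiratory_band(resp_signal, sampling_rate_hz, width_bpm=2.0):
    """Estimate a bandstop range (breaths/min) from a respiration trace.

    Illustrative sketch: find the dominant respiratory frequency by FFT
    and pad it into a notch band. The 0.1-1.0 Hz search window and the
    notch width are assumptions for demonstration.
    """
    signal = np.asarray(resp_signal, dtype=float)
    signal = signal - signal.mean()  # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sampling_rate_hz)
    # Restrict the peak search to a plausible breathing range (0.1-1 Hz).
    mask = (freqs >= 0.1) & (freqs <= 1.0)
    peak_bpm = 60.0 * freqs[mask][np.argmax(spectrum[mask])]
    return (peak_bpm - width_bpm, peak_bpm + width_bpm)
```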

DCANBOLDProcessing Step Error (possibly file naming issue?)

First, I am using the current Docker image of the pipeline pulled from your Docker Hub about 4 weeks ago. (The -v says "abcd-hcp-pipeline 0.0.1" but I think the file just wasn't updated. According to Docker Hub I believe it's version 0.0.3?)

I have tested and QA'd outputs of all steps up to the DCANBOLD step. For the DCANBOLD step, I believe a file naming issue is creating a series of file-not-found errors. Here is an example of the error (this is repeated for all functional/task scans) from the DCANBOLDProcessing_teardown.err log:

While running:
/opt/workbench/bin_linux64/../exe_linux64/wb_command -cifti-parcellate /output/sub-MSCPI13/ses-20190223/files/MNINonLinear/Results/task-rest_DCANBOLDProc_v4.0.0_Atlas.dtseries.nii /opt/dcan-tools/dcan_bold_proc/templates/parcellations/Gordon/fsLR/Gordon.32k_fs_LR.dlabel.nii COLUMN /output/sub-MSCPI13/ses-20190223/files/MNINonLinear/Results/task-rest_DCANBOLDProc_v4.0.0_Gordon.ptseries.nii

ERROR: failed to open file '/output/sub-MSCPI13/ses-20190223/files/MNINonLinear/Results/task-rest_DCANBOLDProc_v4.0.0_Atlas.dtseries.nii', file does not exist, or folder permissions prevent seeing it

Within that files/MNINonLinear/Results/ folder, I have one dtseries for each task, however, they are named with an extra zero as such:

task-rest0_DCANBOLDProc_v4.0.0_Atlas.dtseries.nii

I believe that zero in the file name is why this step is failing, but am unsure why it's being written with that zero. If someone could look into this I'd appreciate it. Please also let me know if I should upload any other logs or info to help identify what's happening.

I think I'm left with fully preprocessed and concatenated dtseries files... but all the timeseries extraction etc. steps are failing. My summary folder has all the motion files, but the analyses_v2 subfolders are all empty.

Analyses_v2 timecourses overwriting output csv (I believe)

I am using the most recent docker image from dcanumn/abcd-hcp-pipeline on docker hub. All seems to be working well in general, but I do see some weird behavior in the timecourses that are generated. This is not a pressing issue at all, but maybe something to look into.

What happened is we have two tasks and rest that we collected. There is only one csv per parcellation, and the automatically created .csv had far fewer rows than TRs in the concatenated rest. (I assumed it was only providing the timecourse for the rest scans.) But when I looked into it and compared values in the ptseries to the csv, the values were from our saccade task, which is alphabetically last.

I think that the timecourses might be getting overwritten (and there is no task-specific naming in the file name). Example: Gordon_subcorticals.csv, Gordon.csv, etc.

Not an actual issue as we didn't even expect timecourses or any of those "extra" steps to be included in the pipeline, but I figured I'd let you know this is happening.

DCAN BOLD Processing Derivatives for Seed-Based Analysis

Hello,

I have preprocessed pediatric resting state data using the DCAN pipeline. I am looking to take the BOLD functional image for each subject and upload them into CONN toolbox to conduct a seed-based analysis.

My questions are:

  1. Which derivative of the DCAN preprocessing is the BOLD functional file for each subject?
  2. Would you recommend any other processing steps before first-level processing (i.e. smoothing, denoising/band-pass filtering, etc.) within CONN Toolbox?

Thank you very much!
Mariam
