
fastsurfer's Introduction


Welcome to FastSurfer!

Overview

This README contains all the information needed to run FastSurfer - a fast and accurate deep learning-based neuroimaging pipeline. FastSurfer provides a fully compatible FreeSurfer alternative for volumetric analysis (within minutes) and surface-based thickness analysis (in around 1 hour of run time). FastSurfer is transitioning to sub-millimeter resolution support throughout the pipeline.

The FastSurfer pipeline consists of two main parts for segmentation and surface reconstruction.

  • the segmentation sub-pipeline (seg) employs advanced deep learning networks for fast, accurate segmentation and volumetric calculation of the whole brain and selected substructures.
  • the surface sub-pipeline (recon-surf) reconstructs cortical surfaces, maps cortical labels and performs a traditional point-wise and ROI thickness analysis.

Segmentation Modules

  • approximately 5 minutes (GPU); --seg_only runs only this part (see the example call after the module list below).

Modules (all run by default):

  1. asegdkt: FastSurferVINN for whole brain segmentation (deactivate with --no_asegdkt)
    • the core module; outputs anatomical segmentation, cortical parcellation and statistics for 95 classes, mimicking FreeSurfer’s DKTatlas.
    • requires a T1w image (notes on input images), supports high-res (up to 0.7mm, experimental beyond that).
    • performs bias-field correction and calculates volume statistics corrected for partial volume effects (skipped if --no_biasfield is passed).
  2. cereb: CerebNet for cerebellum sub-segmentation (deactivate with --no_cereb)
    • requires asegdkt_segfile, outputs cerebellar sub-segmentation with detailed WM/GM delineation.
    • requires a T1w image (notes on input images), which will be resampled to 1mm isotropic images (no native high-res support).
    • calculates volume statistics corrected for partial volume effects (skipped if --no_biasfield is passed).
  3. hypothal: HypVINN for hypothalamus sub-segmentation (deactivate with --no_hypothal)
    • outputs a hypothalamic sub-segmentation including the 3rd ventricle, mammillary bodies, fornix and optic tracts.
    • a T1w image is highly recommended (notes on input images), supports high-res (up to 0.7mm, but experimental beyond that).
    • allows the additional passing of a T2w image with --t2 <path>, which will be registered to the T1w image (see --reg_mode option).
    • calculates volume statistics corrected for partial volume effects based on the T1w image (skipped if --no_biasfield is passed).
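
For orientation, below is a minimal sketch of a segmentation-only call that also skips the cerebellum and hypothalamus modules. Paths and the subject ID are placeholders; check the flag documentation further down for the exact options:

    ./run_fastsurfer.sh --t1 /data/subjectX/orig.mgz \
                        --sid subjectX --sd /output \
                        --seg_only --no_cereb --no_hypothal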

Surface reconstruction

  • approximately 60-90 minutes; --surf_only runs only the surface part (see the sketch below).
  • supports high-resolution images (up to 0.7mm, experimental beyond that).
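
A minimal sketch of a surface-only call, assuming the segmentation output for subjectX already exists under /output (the FreeSurfer license file is required for this part):

    ./run_fastsurfer.sh --sid subjectX --sd /output \
                        --surf_only --fs_license /fs_license/license.txt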

Requirements for input images

All pipeline parts and modules require good-quality MRI images, preferably from a 3T MR scanner. FastSurfer expects a similar image quality as FreeSurfer, so what works with FreeSurfer should also work with FastSurfer. Notwithstanding module-specific limitations, the resolution should be between 1mm and 0.7mm isotropic (slice thickness should not exceed 1.5mm). The preferred sequence is Siemens MPRAGE or multi-echo MPRAGE; GE SPGR should also work. See the --vox_size flag for high-resolution behaviour.
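
As a hedged illustration of the --vox_size behaviour (paths are placeholders, and the "min" keyword is an assumption to be checked against the --vox_size flag documentation of your FastSurfer version):

    # process at the native (high) resolution of the input, e.g. a 0.8mm scan
    ./run_fastsurfer.sh --t1 /data/subjectX/highres.mgz --sid subjectX --sd /output --vox_size min
    # or force conforming to 1mm isotropic processing
    ./run_fastsurfer.sh --t1 /data/subjectX/highres.mgz --sid subjectX --sd /output --vox_size 1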

Getting started

Installation

There are two ways to run FastSurfer (links are to installation instructions):

  1. In a container (Singularity or Docker) (OS: Linux, Windows, macOS on Intel),
  2. As a native install (all operating systems, for the segmentation part).

We recommend using Singularity or Docker on a Linux host system with a GPU. The images we provide on DockerHub conveniently include everything needed for FastSurfer. You will also need a FreeSurfer license file for the surface pipeline. Detailed per-OS installation instructions can be found in the INSTALL.md file.

Usage

All installation methods use the run_fastsurfer.sh call interface (replace *fastsurfer-flags* with FastSurfer flags), which is the general starting point for FastSurfer. However, there are different ways to call this script depending on the installation, which we explain here:

  1. For container installations, you need to define the hardware and mount the folders with the input (/data), output (/output) and FreeSurfer license (/fs_license) data:
    (a) For singularity, the syntax is

    singularity exec --nv \
                     --no-home \
                     -B /home/user/my_mri_data:/data \
                     -B /home/user/my_fastsurfer_analysis:/output \
                     -B /home/user/my_fs_license_dir:/fs_license \
                     ./fastsurfer-gpu.sif \
                     /fastsurfer/run_fastsurfer.sh \
                     *fastsurfer-flags*
    

    The --nv flag is needed to allow FastSurfer to run on the GPU (otherwise FastSurfer will run on the CPU).

    The --no-home flag tells singularity to not mount the home directory (see Singularity documentation for more info).

    The -B flag is used to tell Singularity which folders FastSurfer can read from and write to.

    See also Example 2 for a full singularity FastSurfer run command and the Singularity documentation for details on more singularity flags.

    (b) For docker, the syntax is

    docker run --gpus all \
               -v /home/user/my_mri_data:/data \
               -v /home/user/my_fastsurfer_analysis:/output \
               -v /home/user/my_fs_license_dir:/fs_license \
               --rm --user $(id -u):$(id -g) \
               deepmi/fastsurfer:latest \
               *fastsurfer-flags*
    

    The --gpus flag is needed to allow FastSurfer to run on the GPU (otherwise FastSurfer will run on the CPU).

    The -v flag is used to tell Docker which folders FastSurfer can read from and write to.

    See also Example 1 for a full FastSurfer run inside a Docker container and the Docker documentation for more details on the docker flags including --rm and --user.

  2. For a native install, you need to activate your FastSurfer environment (e.g. conda activate fastsurfer_gpu) and make sure you have added the FastSurfer path to your PYTHONPATH variable, e.g. export PYTHONPATH=$(pwd).

    You will then be able to run fastsurfer with ./run_fastsurfer.sh *fastsurfer-flags*.

    See also Example 3 for an illustration of the commands to run the entire FastSurfer pipeline (FastSurferCNN + recon-surf) natively.
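
    Putting the native-install steps together, a minimal sketch (the environment name and all paths are examples only):

    conda activate fastsurfer_gpu
    cd /path/to/FastSurfer
    export PYTHONPATH=$(pwd)
    ./run_fastsurfer.sh --t1 /home/user/my_mri_data/subjectX/orig.mgz \
                        --sid subjectX --sd /home/user/my_fastsurfer_analysis \
                        --fs_license /home/user/my_fs_license_dir/license.txt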

FastSurfer_Flags

Please refer to FASTSURFER_FLAGS.
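
As an illustration only, *fastsurfer-flags* in the container calls above typically expands to something like the following (paths refer to the mounted container folders; see FASTSURFER_FLAGS for the authoritative list):

    --fs_license /fs_license/license.txt \
    --t1 /data/subjectX/orig.mgz \
    --sid subjectX --sd /output \
    --parallel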

Examples

All the examples can be found here: FASTSURFER_EXAMPLES

Output files

Modules output can be found here: FastSurfer_Output_Files

System Requirements

Recommendation: at least 8 GB system memory and 8 GB NVIDIA graphics memory (--viewagg_device gpu)

Minimum: 7 GB system memory and 2 GB graphics memory (--viewagg_device cpu --vox_size 1)

Minimum CPU-only: 8 GB system memory, much slower and not recommended (--device cpu --vox_size 1)

Minimum Requirements:

    --vox_size  --viewagg_device  Min GPU (in GB)  Min CPU (in GB)
    1mm         gpu               5                5
    1mm         cpu               2                7
    0.8mm       gpu               8                6
    0.8mm       cpu               3                9
    0.7mm       gpu               8                6
    0.7mm       cpu               3                9
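
Combining the table with the flags above, two hedged example calls for memory-constrained setups (paths and subject ID are placeholders):

    # limited GPU memory: run view aggregation on the CPU at 1mm
    ./run_fastsurfer.sh --t1 /data/subjectX/orig.mgz --sid subjectX --sd /output \
                        --viewagg_device cpu --vox_size 1
    # no usable GPU: run everything on the CPU (much slower, not recommended)
    ./run_fastsurfer.sh --t1 /data/subjectX/orig.mgz --sid subjectX --sd /output \
                        --device cpu --vox_size 1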

Expert usage

Individual modules and the surface pipeline can be run independently of the full pipeline script documented here. This is documented in the READMEs in the respective subfolders, for example: whole brain segmentation only with FastSurferVINN, cerebellum sub-segmentation, hypothalamic sub-segmentation and surface pipeline only (recon-surf).

Specifically, the segmentation modules feature options for optimized parallelization of batch processing.

FreeSurfer Downstream Modules

FreeSurfer provides several add-on modules for downstream processing, such as subfield segmentation (hippocampus/amygdala, brainstem, thalamus and hypothalamus) as well as TRACULA. We now provide symlinks to the required files, as FastSurfer creates them with a different name (e.g. using "mapped" or "DKT" to make clear that these files are from our segmentation using the DKT atlas protocol, mapped to the surface). Most subfield segmentations require wmparc.mgz and work very well with FastSurfer, so feel free to run those pipelines after FastSurfer. TRACULA requires aparc+aseg.mgz, which we now link, but we have not tested whether it works, given that the DKT atlas merges a few labels. You should source FreeSurfer 7.3.2 to run these modules.
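
As a sketch of how such a downstream module might be run after FastSurfer (assuming FreeSurfer 7.3.2 is installed at the path shown and the FastSurfer output directory is used as the FreeSurfer subjects directory; segmentHA_T1.sh is the standard FreeSurfer hippocampus/amygdala subfield script, not part of FastSurfer):

    # set up FreeSurfer 7.3.2 (installation path is an example)
    export FREESURFER_HOME=/path/to/freesurfer-7.3.2
    source $FREESURFER_HOME/SetUpFreeSurfer.sh
    # point FreeSurfer at the FastSurfer output directory
    export SUBJECTS_DIR=/home/user/my_fastsurfer_analysis
    # hippocampus/amygdala subfield segmentation on a FastSurfer-processed subject
    segmentHA_T1.sh subjectX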

Intended Use

This software can be used to compute statistics from an MR image for research purposes. Estimates can be used to aggregate population data, compare groups, etc. The data should not be used for clinical decision support in individual cases and, therefore, does not benefit the individual patient. Be aware that for a single image, the produced results may be unreliable (e.g. due to head motion, imaging artefacts, processing errors, etc.). We always recommend performing visual quality checks on your data, as your MR sequence may also differ from the ones we tested. No contributor shall be liable for any damages; see also our software LICENSE.

References

If you use this for research publications, please cite:

Henschel L, Conjeti S, Estrada S, Diers K, Fischl B, Reuter M, FastSurfer - A fast and accurate deep learning based neuroimaging pipeline, NeuroImage 219 (2020), 117012. https://doi.org/10.1016/j.neuroimage.2020.117012

Henschel L*, Kuegler D*, Reuter M. (*co-first). FastSurferVINN: Building Resolution-Independence into Deep Learning Segmentation Methods - A Solution for HighRes Brain MRI. NeuroImage 251 (2022), 118933. http://dx.doi.org/10.1016/j.neuroimage.2022.118933

Faber J*, Kuegler D*, Bahrami E*, et al. (*co-first). CerebNet: A fast and reliable deep-learning pipeline for detailed cerebellum sub-segmentation. NeuroImage 264 (2022), 119703. https://doi.org/10.1016/j.neuroimage.2022.119703

Estrada S, Kuegler D, Bahrami E, Xu P, Mousa D, Breteler MMB, Aziz NA, Reuter M. FastSurfer-HypVINN: Automated sub-segmentation of the hypothalamus and adjacent structures on high-resolutional brain MRI. Imaging Neuroscience 2023; 1: 1–32. https://doi.org/10.1162/imag_a_00034

Stay tuned for updates and follow us on X/Twitter.

Acknowledgements

This project is partially funded by:

The recon-surf pipeline is largely based on FreeSurfer.

fastsurfer's People

Contributors

36000, af-a, agirodi, clepol, dependabot[bot], dkuegler, engrosamaali91, jrussell9000, kdiers, lehenschel, m-reuter, mscheltienne, neginshirvani, oikosohn, pcamach2, pmoonesi, santiestrada32, taha-abdullah, tashrifbillah, vellaro


fastsurfer's Issues

wmparc file?

Hi,

Is FastSurfer able to generate a wmparc.mgz that is found in FreeSurfer? If so, can you please advise on how to do this.

Thanks for your help,

Vinny

Do you have the docker image?

Hi, do you guys distribute the docker image? I couldn't find/pull fastsurfer:gpu. I can build the image but it would be convenient to use one that you might have hosted somewhere.

Add FastSurfer to Open Neuroscience

Hello!

We are reaching out because we would love to have your project listed on Open Neuroscience, and also share information about this project:

Open Neuroscience is a community run project, where we are curating and highlighting open source projects related to neurosciences!

Briefly, we have a website where short descriptions of projects are listed, with links to the projects themselves and their authors, together with images and other links.

Once a new entry is made, we make a quick check for spam, and publish it.

Once published, we make people aware of the new entry by Twitter and a Facebook group.

To add information about their project, developers only need to fill out this form

In the form, people can add subfields and tags to their entries, so that projects are filterable and searchable on the website!

The reason why we have the form system is that it makes it open for everyone to contribute to the website and allows developers themselves to describe their projects!

Also, there are so many amazing projects coming out in Neurosciences that it would be impossible for us to keep track and log them all!

Open Neuroscience tech stack leverages open source tools as much as possible:

  • The website is based on HUGO + Academic Theme
  • Everything is hosted on github here
  • We use plausible.io to see visit stats on the website. It respects visitors privacy, doesn't install cookies on your computer
    • You can check our visitor stats here

Please get in touch if you have any questions or would like to collaborate!

Mask Value Meaning

This is a beginner's question. I used FastSurferCNN for segmentation and got the resulting segmentation mask, but I don't know which structure name corresponds to each value in the mask. Please tell me where to find this.

WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

Hi all,

After setting up Docker, I ran the following command

docker run --gpus all -v /home/user/my_mri_data:/data \
           -v /home/user/my_fastsurfer_analysis:/output \
           -v /home/user/my_fs_license_dir:/fs60 \
           --rm --user XXXX fastsurfer:gpu \
           --fs_license /fs60/.license \
           --t1 /data/subject2/orig.mgz \
           --sid subject2 --sd /output \
           --parallel
from https://github.com/Deep-MI/FastSurfer/blob/master/Docker/README.md
with the file path, user ID, and the license key file path replaced accordingly.

However, the following warning showed up:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

I searched for the exact error online and found that many people using Docker on Apple Silicon chips (M1) were having similar problems. The solution, according to the official Docker documentation https://docs.docker.com/docker-for-mac/apple-silicon/ was to add the following flag to our existing command
--platform linux/amd64
I tried adding the flag after --parallel but the same error still showed up.

Why GPU is not detected?

Hello,

I used the following command to run a test case:

./run_fastsurfer.sh --t1 $datadir/P008/P008_T1_mri_orig.mgz
--sid P008 --sd $fastsurferdir

But no GPU is used when the program is running. I have several GPUs available, the recon-all.log file shows:
Cuda available: False, # Available GPUS: 0, Cuda user disabled (--no_cuda flag): False, --> Using device: cpu

Does anyone know why this happens?
Thanks.

Question: GPU Usage and Volume extraction

I am using the FastSurfer inside Docker example. As in the example, I used "--gpu all", to use both of mine GTX 1080Ti cards. However, monitoring the GPU usage I see no more than 5% of one GPU being used, Screenshot with GPU and CPU usage.

Also, how can I specify only volume extraction, ignoring the cortical thickness?

(Thank you for your great contribution to the community. Your article is very interesting and this tool will be of great use for many studies.)

Accepted File Formats

Hi there,
What are the accepted file formats for FastSurfer? I would like to use it to improve speed/performance compared to FreeSurfer, but in all examples you provide an "orig.mgz" file, which is a file created by FreeSurfer?

Many Thanks,
Chris

About training data

Hi. Just read your paper. Amazing work! However, one thing that I found confusing is what data you used as ground truth to calculate the loss. Did you pass all training samples through FreeSurfer and use the results as ground truth? Thank you in advance!

hanging docker images from the setup

There are at least 9 docker images created during the FastSurfer setup that never get deleted.
[screenshot]

Even when I use the docker force remove option on the images (docker rmi -f IMAGE_ID), I get the following error and can't delete these temporary images:
[screenshot]

System info:
CentOS 7 X64
Docker version: 18.09.9
CPU: Intel Xeon Gold 5118
GPU: Tesla P4

Did you see similar behavior or do you have a suggestion to clean up the docker images a bit?
Thanks

Error when running 'mris_make_surfaces'

Hi!
Thank you so much for developing and sharing FastSurfer. We were so excited and tried it out right away.
However, the program exited with an error while running the command 'mris_make_surfaces'. lh.white.preaparc was successfully produced, while lh.curv, lh.area and lh.cortex.label failed.

My command:

./run_fastsurfer.sh --fs_license "$fs_license" --seg "$fastsurfer_dir/${subid}/aparc.DKTatlas+aseg.deep.mgz" --t1 "$t1w_path" --sid "$subid" --sd "$fastsurfer_dir" --mc --qspec --nofsaparc --threads 4

And the error goes (full log plz see the attached)
recon-surf.log:

@#@FSTIME  2020:06:25:17:26:06 recon-all N 10 e 747.18 S 1.46 U 754.74 P 101% M 543504 F 0 R 1019906 W 0 c 290 w 129817 I 0 O 53248 L 2.06 2.40 2.06
@#@FSLOADPOST 2020:06:25:17:38:33 recon-all N 10 1.03 1.17 1.50
./mris_make_surfaces -aseg ../mri/aseg.presurf -white white.preaparc -noaparc -whiteonly -mgz -T1 brain.finalsurfs HCD0001305_V1_MR lh
Could not set locale
No such file or directory
Could not set locale
Could not set locale
using white.preaparc as white matter name...
only generating white matter surface
using aseg volume ../mri/aseg.presurf to prevent surfaces crossing the midline
not using aparc to prevent surfaces crossing the midline
INFO: assuming MGZ format for volumes.
using brain.finalsurfs as T1 volume...
$Id: mris_make_surfaces.c,v 1.172 2017/02/16 19:42:36 fischl Exp $
writing white matter surface to /nfs/s2/userhome/liuxingyu/workingdir/temp/HCP-D/anat/HCD0001305_V1_MR/surf/lh.white.preaparc...
LabelErode: NULL label
000: dt: 0.0000, sse=16436.0, rms=2.019
rms = 2.23, time step reduction 1 of 3 to 0.250...
025: dt: 0.2500, sse=15711.4, rms=1.601 (20.681%)
026: dt: 0.2500, sse=15171.1, rms=1.388 (13.312%)
rms = 1.38, time step reduction 2 of 3 to 0.125...
027: dt: 0.2500, sse=15745.9, rms=1.381 (0.518%)
028: dt: 0.1250, sse=15340.2, rms=1.228 (11.073%)
rms = 1.20, time step reduction 3 of 3 to 0.062...
029: dt: 0.1250, sse=15052.7, rms=1.197 (2.562%)
positioning took 0.1 minutes
generating cortex label...
1 non-cortical segments detected
only using segment with 166858 vertices
No such file or directory
LabelDilate: NULL label
No such file or directory
Command terminated by signal 11
@#@FSTIME  2020:06:25:17:38:34 ./mris_make_surfaces N 11 e 58.02 S 1.39 U 76.22 P 133% M 1676756 F 0 R 1192850 W 0 c 116 w 1844 I 0 O 11776 L 1.03 1.17 1.50
@#@FSLOADPOST 2020:06:25:17:39:32 ./mris_make_surfaces N 11 1.35 1.25 1.51
Command exited with non-zero status 1
@#@FSTIME  2020:06:25:17:25:15 /nfs/s2/userhome/liuxingyu/workingdir/temp/HCP-D/anat/HCD0001305_V1_MR/scripts/lh.processing.cmdf N 0 e 856.56 S 29.45 U 937.15 P 112% M 1676756 F 0 R 3432110 W 0 c 592 w 339741 I 16 O 138680 L 1.27 2.37 2.04
@#@FSLOADPOST 2020:06:25:17:39:32 /nfs/s2/userhome/liuxingyu/workingdir/temp/HCP-D/anat/HCD0001305_V1_MR/scripts/lh.processing.cmdf N 0 1.35 1.25 1.51

Could you help us to fix it?
Thanks!
Best,
Xingyu

No further processed results using Docker version

Hi, I am trying to use fastsurfer:cpu with docker since I am using MacOS (High Sierra) and don't have any NVIDIA graphics card.
I tried to follow the README file to run a single orig.mgz file but I am stuck at the initial step.
After the following message,

"Cuda available: False, # Available GPUS: 0, Cuda user disabled (--no_cuda flag): True, --> Using device: cpu"

The processing finished without any error message, and the output directory did not contain any created files except the empty directories 'mri' and 'scripts' and a deep-seg.log file.

Do you have any idea to fix this issue ?

Thanks for your kind comments in advance.

Problem with running FastSurfer on Mac

Hi FastSurfer team,
I am a new user of your pipeline. After reading about it, I am very excited to run your pipeline on my dataset.

I am using Mac Catalina (10.15.7). I installed Docker and followed the instructions in your Docker folder. I built FastSurfer for CPU and ran:

docker run -v /Users/mtay316/Documents/Fast_surfer/my_mri_data/data:/data
-v /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output:/output
-v /Users/mtay316/Documents/Fast_surfer:/fs60
--rm --user 504 fastsurfer:cpu
--fs_license /fs60/license.txt
--t1 /data/subject_01/orig.mgz
--no_cuda
--sid subject_01 --sd /output
--parallel

The problem is that after running these commands, I can see two folders (mri, scripts) being made in my output directory, but there is just one .log file in the scripts folder with this message:

python3.6 eval.py --in_name /data/subject_01/orig.mgz --out_name /output/subject_01/mri/aparc.DKTatlas+aseg.deep.mgz --order 1 --network_sagittal_path /fastsurfer/checkpoints/Sagittal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_axial_path /fastsurfer/checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_coronal_path /fastsurfer/checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --batch_size 8 --simple_run --no_cuda
Reading volume /data/subject_01/orig.mgz
Cuda available: False, # Available GPUS: 0, Cuda user disabled (--no_cuda flag): True, --> Using device: cpu

Can you help me to fix this problem?

Regards,
Maryam

Statistic Information

Hey,

Thanks for sharing your great work! It helps our team to finish our work quickly.
May I ask whether you have a function to print the statistics of the segmented brain structures, like the volume of each brain area? I searched the log file of the FastSurferCNN pipeline but only found timing information.
If not, can you add this function? I believe this function makes your work better and better. ^ ^
Best,
Bryan

Mask for tumors / prior surgeries

Hi.
Amazing tool, thank you.
We often see subjects with tumors or prior resections that result in distorted or even missing brain structures. Do you plan to implement mask usage to avoid segmentation errors on such abnormal brains?
Thanks,
Cristi

nibabel problem

Hi,
I am trying to use FastSurfer segmentation only, but it seems that at the very beginning it cannot import nibabel.
I use python version 3.6 and nibabel==2.5.1

when I run

(env) 0 [mbaniasadi2@iris-169 FastSurfer](2037687 1N/1T/8CN)$ bash run_fastsurfer.sh --fs_license license.txt --sid subject1 --sd /work/projects/ins_dbs/fastsurfer --t1 /work/projects/ins_dbs/fastsurfer/subject1/t1.nii.gz --seg_only

I get the following error:

/work/projects/ins_dbs/fastsurfer/FastSurfer/FastSurferCNN /work/projects/ins_dbs/fastsurfer/FastSurfer
python3.6 eval.py --in_name /work/projects/ins_dbs/fastsurfer/subject1/t1.nii.gz --out_name /work/projects/ins_dbs/fastsurfer/subject1/mri/aparc.DKTatlas+aseg.deep.mgz --order 1 --network_sagittal_path ../checkpoints/Sagittal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_axial_path ../checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_coronal_path ../checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --batch_size 8 --simple_run
Traceback (most recent call last):
File "eval.py", line 19, in
import nibabel as nib
File "/mnt/irisgpfs/projects/ins_dbs/fastsurfer/env/lib/python3.6/site-packages/nibabel/init.py", line 62, in
from . import analyze as ana
File "/mnt/irisgpfs/projects/ins_dbs/fastsurfer/env/lib/python3.6/site-packages/nibabel/analyze.py", line 87, in
from .volumeutils import (native_code, swapped_code, make_dt_codes,
File "/mnt/irisgpfs/projects/ins_dbs/fastsurfer/env/lib/python3.6/site-packages/nibabel/volumeutils.py", line 23, in
from .openers import Opener, BZ2File
File "/mnt/irisgpfs/projects/ins_dbs/fastsurfer/env/lib/python3.6/site-packages/nibabel/openers.py", line 16, in
from bz2 import BZ2File
File "/opt/apps/resif/data/production/v1.1-20180716/default/software/lang/Python/3.6.4-foss-2018a/lib/python3.6/bz2.py", line 23, in
from _bz2 import BZ2Compressor, BZ2Decompressor
ImportError: libbz2.so.1.0: cannot open shared object file: No such file or directory

Thank you for the help,
Mehri

Holes in brain mask

Hi all!
First, let me just say that FastSurfer is a great tool and I am impressed with what it can do. That being said, I do have some issues with the masks created in the process. The masks consistently have holes in them, ranging from one single voxel to several as in the provided screenshot.

[screenshot: brain_mask_holes]

This particular mask was created by running the script without any extra parameters (so just specifying the input image and the output directory and subject id). The input image is a UNIDEN image (T1w denoised) with the dimensions 192x240x256 (which gets converted to 256x256x256 by FS).
It’s entirely possible it is all because of me doing something strange so any help is appreciated!

All the best,
Daniel

[Question] Parallel processing in recon-surf

In this block, you have set up a bunch of variables for parallel processing:

fsthreads=""
export OMP_NUM_THREADS=$threads
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=$threads
if [ "$threads" -gt "1" ]
then
  fsthreads="-threads $threads -itkthreads $threads"
fi

while this says -parallel -openmp <num> only. Why is there this discrepancy?

Memory usage

Hello,

I am trying to run FastSurfer on a single image with this code:

./run_fastsurfer.sh --t1 /home/koba/Desktop/sub-001/anat/orig.mgz --sid subject1 --sd /home/koba/Desktop/sub-001/fsout --parallel --py python3.7 --batch 2

The script fills my swap memory after Coronal Testing and stops no matter what. It doesn't return any error codes. If I set the batch size >10, it tells me this:

RuntimeError: CUDA out of memory. Tried to allocate 320.00 MiB (GPU 0; 3.95 GiB total capacity; 2.49 GiB already allocated; 163.31 MiB free; 3.04 GiB reserved in total by PyTorch)

My computer has 16 GB of RAM (2 GB swap) and an NVIDIA GeForce 1050 GPU (4 GB).

Does that mean my computer is just insufficient for the code? Or Am I missing something in the installation?

Thank you very much!

[Discussion] Run time discrepancy

According to https://github.com/Deep-MI/FastSurfer#overview, (i) should take ~1 min while (ii) should take ~1 hour. On the other hand, they respectively took ~3 mins and ~3 hours for me.

So what machine/GPU did you use? My interest would be to bring the runtime down to the stipulated ones if they are at all possible.

FYI, I used this command:

time docker run --gpus all -v FastSurfer:/root/FastSurfer -v $(pwd):/root/data -v /home/tb571/freesurfer:/root/fs_license/ --rm fastsurfer:gpu --t1 /root/data/003_T1w_resampled.nii.gz --sd /root/FastSurfer/ --sid 003 --parallel --batch 4 --fs_license /root/fs_license/license.txt

And my data resolution:

[root@pnl-z840-2 fs_test]# fslinfo 003_T1w_resampled.nii.gz
data_type      FLOAT32
dim1           256
dim2           256
dim3           44
dim4           1
datatype       16
pixdim1        1.000000
pixdim2        1.000000
pixdim3        4.000000
pixdim4        1.000000
cal_max        0.0000
cal_min        0.0000
file_type      NIFTI-1+

./docker_build.sh: line 3: docker: command not found

Hi,
I am trying to use FastSurfer on a MacBook Pro with the M1 chip and 16 GB of memory.
I downloaded FastSurfer from GitHub, obtained a license key file in .txt format from the given link in the tutorial. After that, I tried to set up Docker by running the following command:
./docker_build.sh
inside my Docker directory, but it showed the following message:
./docker_build.sh: line 3: docker: command not found

Prior to this I ran
./run_fastsurfer.sh --help
inside the folder "FastSurfer-master" and met no problems.

I am almost certain that I ran the command in the correct directory and that I downloaded all the required files. Is there maybe another reason why my Docker isn't responding?

Conform.py function deforms some MRI scans

Hi,

FastSurfer is a great work, I have been using it since it was released, I have a question though.
For some images, I don't know why the conform.py function deforms them.
The files are big to upload here, so I uploaded only a screenshot. Please find the full files before and after the conform.py function here:
https://mega.nz/folder/Bo8z2KgR#xlamrUXuJDwSWBq8fEobIg

Could you please help me understand why this happens, because FastSurfer does not give a reasonable segmentation on the files that are deformed during preprocessing.

[screenshots: before_68, after_68]

Thank you,
Mehri

ERROR: flag --mask not recognized

While running recon-surf.sh, I got the following error:
ERROR: flag --mask not recognized.
On my machine the mri_nu_correct.mni version is: mri_nu_correct.mni,v 1.18.2.1 2013/01/09 21:23:42 nicks Exp $
When I checked the documentation for mri_nu_correct.mni, there is no flag --mask.
Please let me know if there is a way around this.

Integrate with fmriprep

My lab would love to use your program instead of FreeSurfer to speed up our pre-processing pipeline tenfold. However, we use fmriprep for our pre-processing and currently FastSurfer is not compatible with fmriprep due to the naming of the output files. If the output files could be changed to match the naming and structure of the FreeSurfer outputs that would enable us and many others to use your program with fmriprep.

I have also reached out to fmriprep to see if they could potentially integrate your program into their pipeline. It looks like the easiest solution would be changing the names of the FastSurfer output files or providing a flag that would enable the option to do so.
nipreps/fmriprep#2216

failed to create docker image

Hi,

When I build the docker image from "Dockerfile_FastSurferCNN_CPU", I get the following error at the "# Add FastSurfer (copy application code) to docker image" stage.

error message:
COPY failed: stat /var/lib/docker/tmp/docker-builder791633763/FastSurferCNN: no such file or directory

System: Windows 10 (2004, Build 19041) + Docker Desktop Community (2.3.0.3)

Skip the brain extraction step in FastSurfer

Hi,

I am trying to run the dockerized FastSurfer without the brain extraction step, but I don't know how to do that.
In FreeSurfer, I usually pass -skullstrip when I run the recon-all process.
Is it possible to use that option in FastSurfer?

Best,
Yooha

custom data in google colab do not work for me

Thanks to Leoni Henschel for the nice tutorial via ISMRM this morning.

I tested the Tutorial FastSurferCNN-QuickSeg via google colab following the live tutorial.
It works for me with the "Alternative" solution of using the provided data (FreeSurfer tutorial). However, it does not seem to work with any other data.

1.) As soon as I used the "alternative" solution once, it seems to look at the wrong location (/content vs /content/fastsurfer).

2.) It does not seem to be able to find the attribute "data" (full error message at this link). I tried multiple MP2RAGE datasets in nii/nii.gz/mgz, but I always get the same error.
Can you point me to descriptions of how I need to prepare my data to work for the tutorial?

Thanks a lot, Renzo

Unable to run fastsurfer on cuda on Geforce 3070x with 8 GB

python3.8 eval.py --in_name /home/knutjb/subjects/bert_fast/mri/orig/001.mgz --out_name /home/knutjb/subjects//bert_fast/mri/aparc.DKTatlas+aseg.deep.mgz --order 1 --network_sagittal_path /home/knutjb/sources/FastSurfer/checkpoints/Sagittal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_axial_path /home/knutjb/sources/FastSurfer/checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_coronal_path /home/knutjb/sources/FastSurfer/checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --batch_size 8 --simple_run --run_viewagg_on check
Reading volume /home/knutjb/subjects/bert_fast/mri/orig/001.mgz
Cuda available: True, # Available GPUS: 1, Cuda user disabled (--no_cuda flag): False, --> Using device: cuda
Loading Axial
Successfully loaded Image from /home/knutjb/subjects/bert_fast/mri/orig/001.mgz
Loading Axial Net from /home/knutjb/sources/FastSurfer/checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl
Axial model loaded.
Traceback (most recent call last):
File "eval.py", line 462, in
fastsurfercnn(options.iname, options.oname, use_cuda, small_gpu, logger, options)
File "eval.py", line 308, in fastsurfercnn
pred_prob = run_network(img_filename,
File "eval.py", line 211, in run_network
temp = model(images_batch)
File "/home/knutjb/python3.8/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/extra_space2/sources/FastSurfer/FastSurferCNN/models/networks.py", line 69, in forward
encoder_output1, skip_encoder_1, indices_1 = self.encode1.forward(x)
File "/extra_space2/sources/FastSurfer/FastSurferCNN/models/sub_module.py", line 263, in forward
out_block = super(CompetitiveEncoderBlockInput, self).forward(x) # To be concatenated as Skip Connection
File "/extra_space2/sources/FastSurfer/FastSurferCNN/models/sub_module.py", line 201, in forward
x2 = torch.cat((x2_bn, x1_bn), dim=4) # Concatenating along the 5th dimension
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 7.79 GiB total capacity; 5.49 GiB already allocated; 269.06 MiB free; 5.49 GiB reserved in total by PyTorch)
(python3.8) knutjb@knut:~/sources/FastSurfer$ ./run_fastsurfer_kjb.sh --t1 ~/subjects/bert_fast/mri/orig.mgz --sid bert_fast --sd ~/subjects/ --parallel --threads 4 --no_cuda
Setting ENV variable FASTSURFER_HOME to current working directory /home/knutjb/sources/FastSurfer.
Change via enviroment to location of your choice if this is undesired (export FASTSURFER_HOME=/dir/to/FastSurfer)

Process killed

Hello! First of all, thanks for pushing this out. I've been testing this on my end directly with the run_fastsurfer.py file. Unfortunately the process gets killed and I'm not sure why. I do suspect it may have to do with memory.

(fastsurfer) sid@sid-research:~/Repos/FastSurfer$ python run_fastsurfer.py --sid IAM_1116 --sd ~/Desktop/FastSurfer/Out/ --seg /home/sid/Desktop/FastSurfer/Out/IAM_1116/aparc+aseg_deep.mgz --t1 '/home/sid/Desktop/FastSurfer/In/IAM_1116/anat/IAM_1116_5_t1_mprage_sag_p2_iso__.nii.gz' --n_threads 16
Reading volume /home/sid/Desktop/FastSurfer/In/IAM_1116/anat/IAM_1116_5_t1_mprage_sag_p2_iso__.nii.gz
Conforming image to UCHAR, RAS orientation, and 1mm isotropic voxels
Input:    min: 0  max: 684
rescale:  min: 0.0  max: 454.86  scale: 0.5606120564569318
Output:   min: 0.0  max: 255.0
Loading Axial
Successfully loaded Image from /home/sid/Desktop/FastSurfer/In/IAM_1116/anat/IAM_1116_5_t1_mprage_sag_p2_iso__.nii.gz
Loading Sagittal
Successfully loaded Image from /home/sid/Desktop/FastSurfer/In/IAM_1116/anat/IAM_1116_5_t1_mprage_sag_p2_iso__.nii.gz
Loading Coronal.
Successfully loaded Image from /home/sid/Desktop/FastSurfer/In/IAM_1116/anat/IAM_1116_5_t1_mprage_sag_p2_iso__.nii.gz
Cuda available: True, # Available GPUS: 1, Cuda user disabled (--no_cuda flag): False, --> Using device: cuda
Loading Axial Net from ./checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl
Axial model loaded.
--->Batch 0 Axial Testing Done.
--->Batch 1 Axial Testing Done.
--->Batch 2 Axial Testing Done.
--->Batch 3 Axial Testing Done.
--->Batch 4 Axial Testing Done.
--->Batch 5 Axial Testing Done.
--->Batch 6 Axial Testing Done.
--->Batch 7 Axial Testing Done.
--->Batch 8 Axial Testing Done.
--->Batch 9 Axial Testing Done.
--->Batch 10 Axial Testing Done.
--->Batch 11 Axial Testing Done.
--->Batch 12 Axial Testing Done.
--->Batch 13 Axial Testing Done.
--->Batch 14 Axial Testing Done.
--->Batch 15 Axial Testing Done.
Axial View Tested in 18.0291 seconds
Loading Coronal Net from ./checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl
Killed

This is running in a Python 3.6 environment on a system with Ryzen 2700x, GTX1080 and 16 GB RAM.

EDIT: Upon further investigation, it is indeed an out of memory error:

[Wed Jun 24 14:40:37 2020] Out of memory: Killed process 20341 (python) total-vm:31198980kB, anon-rss:11861040kB, file-rss:99992kB, shmem-rss:12288kB, UID:1000 pgtables:26172kB oom_score_adj:0

That translates to ~32 gigabytes of memory that the script requested. I'll try this on a system with larger memory.

fastsurferCNN eval.py memory usage

Hi,
I downloaded your code and tried to run eval.py in fastsurferCNN folder with this command
python eval.py --i_dir 'adni' --o_dir output --in_name 1.nii --order 1 --batch_size 1 --simple_run false
Our computer has 32 GB of RAM and our GPU is an 11 GB 2080 Ti.
Before running the code, I checked that the free memory was 28 GB. Before running
_, prediction_image = torch.max(torch.add(torch.mul(torch.add(prediction_probability_axial, prediction_probability_coronal), 0.4), torch.mul(prediction_probability_sagittal, 0.2)), 3)
the free memory was 8 GB, and then the script exited unexpectedly with no warning.
I also split the code into 4 separate statements. After running
a = torch.add(prediction_probability_axial, prediction_probability_coronal)
the free memory was 2.6 GB, and it exited while computing b = torch.mul(a, 0.4).
I believe it is a memory problem, very similar to Issue #2.

using different atlas

Hi, this tool is very helpful for segmentation, and I wonder whether it allows segmenting with a different FS atlas, e.g. a2009s? I haven't seen a similar post, but please point me to one if it exists. Thanks!

docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

Hi FastSurfer Team and others,

I am trying to use FastSurfer on a MacBook Pro carrying the Apple Silicon M1 chip. I've encountered 2 issues already specific to those using the Apple Silicon chip and resolved them (see previously posted issues), but this one seems like an issue just for Docker (see screenshot below).

I have already searched online for the returned error, but all I found was that I was supposed to install nvidia-container-toolkit via sudo apt-get. However, apt-get doesn't work on Mac as far as I know, and Homebrew does not have nvidia-container-toolkit available to install.

Regards, Ian L

[screenshot]

SubmillimeterRecon

Hi all!

First of all congratulations, FastSurfer is really impressive and a great tool!

As you are of course aware, FreeSurfer has the option to do SubmillimeterRecon .

Any plans to support this in FastSurfer? Would be nice to work in native resolution with high-res scans.

All the best,
Marco

This script didn't process it

Hello,

I ran this script and it didn't process anything. The results are empty. I don't know where the problem is, can you help me? Also, when I ran Python, I saw "Killed" on the terminal.
Thanks!
[screenshots]

CUDA initialization: Found no NVIDIA driver on your system

Hi, I am sorry, I am writing here because the error I am having may be due to requirements. I am running the whole segmentation, and after the trials in coronal, axial and sagittal the process stops and says it does not recognize the original MRI. The error above that is:

/home/osboxes/anaconda3/lib/python3.8/site-packages/torch/cuda/init.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0

Is there a specific NVIDIA driver I have to download and install? I am running FastSurfer on Ubuntu Linux. Thank you so much for any info you can give me!

Originally posted by @SaraRomanella in #18 (comment)

recon-all "No such file or directory" error

Hi,

I was running the full pipeline on a T1 weighted nii image in docker. During recon-surf the error below occurred.

mri_pretess done

#--------------------------------------------
#@# Fill Mon Jul 6 08:27:41 UTC 2020
/output/subject2/mri

mri_fill -a ../scripts/ponscc.cut.log -xform transforms/talairach.lta -segmentation aseg.auto_noCCseg.mgz wm.mgz filled.mgz

logging cutting plane coordinates to ../scripts/ponscc.cut.log...
INFO: Using transforms/talairach.lta and its offset for Talairach volume ...
using segmentation aseg.auto_noCCseg.mgz...
reading input volume...done.
searching for cutting planes...voxel to talairach voxel transform
1.27551 -0.07937 0.10293 -45.56485;
0.09484 1.40723 0.08887 -90.22624;
-0.12391 0.53786 2.07099 -235.68204;
0.00000 0.00000 0.00000 1.00000;
voxel to talairach voxel transform
1.27551 -0.07937 0.10293 -45.56488;
0.09484 1.40723 0.08888 -90.22624;
-0.12391 0.53786 2.07099 -235.68204;
0.00000 0.00000 0.00000 1.00000;
reading segmented volume aseg.auto_noCCseg.mgz...
Looking for area (min, max) = (350, 1400)
area[0] = 57 (min = 350, max = 1400), aspect = 0.85 (min = 0.10, max = 0.75)
need search nearby
using seed (112, 161, 56), TAL = (16.0, -72.0, -33.0)
talairach voxel to voxel transform
0.77559 0.05945 -0.04110 31.01771;
-0.05612 0.71816 -0.02803 55.63380;
0.06098 -0.18296 0.48768 101.20889;
0.00000 0.00000 0.00000 1.00000;
segmentation indicates cc at (112, 161, 56) --> (16.0, -72.0, -33.0)
done.
could not find lh seed point around (139, 162, 107)
talairach cc position changed to (16.00, -72.00, -33.00)
Erasing brainstem...done.
seed_search_size = 9, min_neighbors = 5
search rh wm seed point around talairach space:(34.00, -72.00, -33.00) SRC: (111.19, 164.41, 104.80)
search lh wm seed point around talairach space (-2.00, -72.00, -33.00), SRC: (139.11, 162.39, 106.99)
No such file or directory
Linux 94f1d98586a7 5.3.0-61-generic #55~18.04.1-Ubuntu SMP Mon Jun 22 16:40:20 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

recon-all -s subject2 exited with ERRORS at Mon Jul 6 08:27:46 UTC 2020

For more details, see the log file /output/subject2/scripts/recon-all.log
To report a problem, see http://surfer.nmr.mgh.harvard.edu/fswiki/BugReporting

Command exited with non-zero status 1
@#@FSTIME 2020:07:06:08:23:53 recon-all N 7 e 233.58 S 2.63 U 230.30 P 99% M 1129484 F 1097 R 1036284 W 0 c 835 w 1836 I 259416 O 4064 L 1.27 1.80 2.19
@#@FSLOADPOST 2020:07:06:08:27:46 recon-all N 7 1.09 1.40 1.93

It looks like a freesurfer-specific issue. I wonder if there is any way I can diagnose it.

Thank you,
Zhengyang

RuntimeError: affine matrix has wrong number of columns

Hello, I was running FastSurfer, but the program threw the error below. Please help me figure out what caused it, thank you.

Log file for fastsurfercnn eval.py
Thu 13 Aug 16:31:51 CST 2020

python3.6 eval.py --in_name /home/data_1/zhaoyang/toolbox/FastSurfer-fix-requirements/example/T1/01.nii --out_name /home/data_1/zhaoyang/toolbox/FastSurfer-fix-requirements/example/output/01/mri/aparc.DKTatlas+aseg.deep.mgz --order 1 --network_sagittal_path ../checkpoints/Sagittal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_axial_path ../checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_coronal_path ../checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --batch_size 16 --simple_run
Reading volume /home/data_1/zhaoyang/toolbox/FastSurfer-fix-requirements/example/T1/01.nii
Conforming image to UCHAR, RAS orientation, and 1mm isotropic voxels
Input:    min: 0  max: 17004
rescale:  min: 0.0  max: 10321.428000000002  scale: 0.024705883720740965
Traceback (most recent call last):
  File "eval.py", line 483, in <module>
    fast_surfer_cnn(options.iname, options.oname, options)
  File "eval.py", line 167, in fast_surfer_cnn
    header_info, affine_info, orig_data = load_and_conform_image(img_filename, interpol=args.order)
  File "/home/data_1/zhaoyang/toolbox/FastSurfer-fix-requirements/FastSurferCNN/data_loader/load_neuroimaging_data.py", line 44, in load_and_conform_image
    orig = conform(orig, interpol)
  File "/home/data_1/zhaoyang/toolbox/FastSurfer-fix-requirements/FastSurferCNN/data_loader/conform.py", line 229, in conform
    mapped_data = map_image(img, h1.get_affine(), h1.get_data_shape(), order=order)
  File "/home/data_1/zhaoyang/toolbox/FastSurfer-fix-requirements/FastSurferCNN/data_loader/conform.py", line 85, in map_image
    new_data = affine_transform(img.get_data(), inv(vox2vox), output_shape=out_shape, order=order)
  File "/home/zhaoyang/anaconda3/envs/py36/lib/python3.6/site-packages/scipy/ndimage/interpolation.py", line 467, in affine_transform
    raise RuntimeError('affine matrix has wrong number of columns')
RuntimeError: affine matrix has wrong number of columns

[Question] Space of orig and aparc.DKTatlas+aseg

The training input for FastSurfer is mri/orig.mgz and mri/aparc.DKTatlas+aseg.mgz. My question: are the two files in any standard space, such as MNI? I know the two are created by FreeSurfer, but I do not know the answer.

I noticed that at the start of the FreeSurfer pipeline, mri_convert -c is applied, which produces orig.mgz with the following:

voxel to ras transform:
               -1.0000   0.0000   0.0000   128.5000
                0.0000   0.0000   1.0000  -127.5000
                0.0000  -1.0000   0.0000   128.5000
                0.0000   0.0000   0.0000     1.0000

[Question] Height and width of an MRI?

fslinfo reports the following for my data:

data_type       INT16
dim1            176
dim2            256
dim3            256
dim4            1
datatype        4
pixdim1         1.000000
pixdim2         1.000000
pixdim3         1.000000
pixdim4         3.200000
cal_max         0.000000
cal_min         0.000000
file_type       NIFTI-1+

Then, what would be the height and width for this?

collections is not a package

(base) [tb571@pnl-oracle FastSurfer]$ pip install -r requirements.txt
Collecting collections (from -r requirements.txt (line 1))
  Could not find a version that satisfies the requirement collections (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for collections (from -r requirements.txt (line 1))

Crash in nu_correct if FOV > 256

Hi there-
I'm currently using the CPU Docker, and I am running the standard docker command for the full FastSurfer as you have in the README, with the exception that I pass in a Nifti where the FOV is greater than 256 mm. When I run it, the deep_seg step runs fine, but during the recon I get a crash with the following key messages:
WARNING ==================++++++++++++++++++++++++=======================================
The physical sizes are (228.80 mm, 282.00 mm, 282.00 mm), which cannot fit in 256^3 mm^3 volume.
The resulting volume will have 282 slices.
If you find problems, please let us know ([email protected]).
==================================================++++++++++++++++++++++++===============
Mask volume and input volume must be the same size.
nu_evaluate: crashed while running evaluate_field (termination status=512)
nu_correct: crashed while running nu_evaluate (termination status=512)
ERROR: nu_correct
If I run mri_convert with the --cw256 option outside of FastSurfer, it no longer has this error. So I guess the question is whether it would be possible to have an option that conforms (and crops) images with FOV > 256 mm, just like the original FreeSurfer allows.
Many thanks,
David Cash

Mac: syntax error near unexpected token `&'

hi,
Thank you for a wonderful job. While running locally, an error was reported as follows:
I have checked the .py file and found no specific reason for the error. Could you please tell me how to solve it?
Thanks very much.

./run_fastsurfer.sh: line 227: syntax error near unexpected token `&'

Originally posted by @zhangcc89claire in #6 (comment)

Error running FastSurfer: python 3.6 command not found

Hi, I was trying to run FastSurfer on a MRI and it gave me the following error:

~/freesurfer/FastSurfer-master/FastSurferCNN ~/freesurfer/FastSurfer-master
python3.6 eval.py --in_name /home/osboxes/freesurfer/Astro/001.mgz --out_name /home/osboxes/freesurfer/Astro/subject1/mri/aparc.DKTatlas+aseg.deep.mgz --order 1 --network_sagittal_path ../checkpoints/Sagittal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_axial_path ../checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_coronal_path ../checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --batch_size 8 --simple_run
./run_fastsurfer.sh: line 291: python3.6: command not found

Even if I replace the default python3.6 with the version I am currently using (3.8.3), the error is the same:
./run_fastsurfer.sh: line 291: 3.8.3: command not found

Could you please give me an idea of how fixing this? Thank you so much!

Help wanted

Can you guys give me an example workflow for using your docker fastsurfer:CPU on a Windows 10 host?
Thank you,
-SG

Error running FastSurfer on Mac

Hi,

I'm getting this error:
./run_fastsurfer.sh: line 285: syntax error near unexpected token `&'
./run_fastsurfer.sh: line 285: `date |& tee -a $seg_log'

When I run the code in the examples.
Can you help me out?

Thanks.
Best,Nuno
