mwaskom / lyman
Data pipelines and analysis library for functional MRI
Home Page: http://www.cns.nyu.edu/~mwaskom/software/lyman
License: BSD 3-Clause "New" or "Revised" License
This is probably more accurate.
I ran run_group.py with no arguments to estimate a group-level model in MNI space. The program crashed with the error message below. It looks like "mni152reg" is being treated as an option; I wonder if it should be "--srcreg mni152reg".
RuntimeError: Command:
mri_vol2surf --hemi lh --mni152reg --o /ncf/anl/anl10/REWMEM/analyses/analyses/workingdir/rm_pst/mni_group/surfproj/_l1_contrast_reward/_mni_hemi_lh/maskproj/lh.group_mask.mgz --projfrac-max 0.000 1.000 0.100 --mov /ncf/anl/anl10/REWMEM/analyses/analyses/workingdir/rm_pst/mni_group/_l1_contrast_reward/merge/group_mask.nii.gz --trgsubject fsaverage
Standard error:
ERROR: Option --mni152reg unknown
Yo dawg.
Errors within make_masks.py are no longer propagated correctly
Traceback (most recent call last):
File "/home/mwaskom/anaconda/bin/make_masks.py", line 259, in <module>
main(sys.argv[1:])
File "/home/mwaskom/anaconda/bin/make_masks.py", line 66, in main
proj_args, args.save_native)
File "/home/mwaskom/anaconda/lib/python2.7/site-packages/lyman/maskfactory.py", line 142, in from_common_label
self.from_native_label(native_label_temp, hemis, proj_args)
File "/home/mwaskom/anaconda/lib/python2.7/site-packages/lyman/maskfactory.py", line 165, in from_native_label
self.execute(proj_cmds, indiv_mask_temp)
File "/home/mwaskom/anaconda/lib/python2.7/site-packages/lyman/maskfactory.py", line 283, in execute
raise RuntimeError(res.pyerr)
File "/home/mwaskom/anaconda/lib/python2.7/site-packages/IPython/parallel/client/asyncresult.py", line 267, in __getattr__
self.__class__.__name__, key))
AttributeError: 'AsyncMapResult' object has no attribute 'pyerr'
Also in general the parallel machinery should be updated to use the right namespace.
i.e. deep white matter/CSF. Should probably do some kind of PCA, but there's also not really any feasible way to cross-validate the number of components as is done in GLMdenoise. 6 sounds like a good round number? Then also add a way to use these as confounds in the model.
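The extraction step above can be sketched with a plain SVD. This is a minimal aCompCor-style illustration, not lyman code; the function and variable names are made up, and "6" just follows the round number suggested above.

```python
import numpy as np

# Hypothetical sketch: pull the top principal component timeseries out of
# deep white matter / CSF voxels to use as confound regressors.
def extract_noise_components(noise_ts, n_components=6):
    """noise_ts: (n_timepoints, n_voxels) array of noise-ROI timecourses."""
    # Center each voxel's timecourse so the SVD captures variance, not means
    centered = noise_ts - noise_ts.mean(axis=0)
    # The left singular vectors are orthonormal component timeseries
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    return U[:, :n_components]

# Usage on simulated data: 200 TRs, 500 noise-ROI voxels
rng = np.random.RandomState(0)
components = extract_noise_components(rng.randn(200, 500))
```

The returned columns are orthonormal, so they can be appended to a design matrix without introducing collinearity among themselves.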
Could be a different solution to parameters that vary across subjects: allow people to write experiment files assigned to subjects that inherit from the main experiment but override some values. Could be tricky to make work with altmodels. It would also require restructuring the way the workflows work to fully inject the experiment info as a workflow variable, probably using a new kind of interface that accesses experiment data through output ports. Overall it seems tricky but maybe useful.
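The inheritance idea can be sketched with a ChainMap; the keys here are hypothetical stand-ins for experiment parameters, not lyman's actual config schema. Lookups hit the subject-specific overrides first and fall back to the main experiment for everything else.

```python
from collections import ChainMap

# Hypothetical experiment dicts: the subject file only lists overrides
main_exp = {"smooth_fwhm": 6, "hpf_cutoff": 128, "TR": 2.0}
subj_exp = {"TR": 2.5}  # this subject was scanned with a different TR

# Resolution order: subject overrides, then the main experiment file
exp_for_subject = ChainMap(subj_exp, main_exp)
```

This keeps the main experiment file untouched while giving each subject a complete view of the parameters.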
I wanted to get this in so I could cache the permutations, but the fix needs to be broader
Streamline that
Hi,
I was getting an error with the command:
run_fmri.py -w model -altmodel non
Below is the error message:
150913-22:21:36,662 workflow INFO:
Traceback (most recent call last):
File "/Users/cnh/anaconda/lib/python2.7/site-packages/nipype-0.10.0-py2.7.egg/nipype/pipeline/plugins/multiproc.py", line 18, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/Users/cnh/anaconda/lib/python2.7/site-packages/nipype-0.10.0-py2.7.egg/nipype/pipeline/engine.py", line 1424, in run
self._run_interface()
File "/Users/cnh/anaconda/lib/python2.7/site-packages/nipype-0.10.0-py2.7.egg/nipype/pipeline/engine.py", line 1534, in _run_interface
self._result = self._run_command(execute)
File "/Users/cnh/anaconda/lib/python2.7/site-packages/nipype-0.10.0-py2.7.egg/nipype/pipeline/engine.py", line 1660, in _run_command
result = self._interface.run()
File "/Users/cnh/anaconda/lib/python2.7/site-packages/nipype-0.10.0-py2.7.egg/nipype/interfaces/base.py", line 998, in run
runtime = self._run_interface(runtime)
File "/Users/cnh/anaconda/lib/python2.7/site-packages/lyman/workflows/model.py", line 190, in _run_interface
self.design_report(self.inputs.exp_info, X, design_kwargs)
File "/Users/cnh/anaconda/lib/python2.7/site-packages/lyman/workflows/model.py", line 270, in design_report
X.plot_confound_correlation(fname=corr_png, close=True)
File "/Users/cnh/anaconda/lib/python2.7/site-packages/moss/glm.py", line 590, in plot_confound_correlation
lgd = ax.legend(bars, self._confound_names, ncol=ncol, fontsize=10,
UnboundLocalError: local variable 'bars' referenced before assignment
It seems that the design matrix of the alternative model was not generated... Could you help me fix this error? Thanks.
When moving files to a new computer or User, the "preproc" directories are symbolically linked to the original base experiment preproc directory, rather than updating to the relative directory under the new User name.
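A sketch of the portable alternative: compute the link target relative to the link's own directory with os.path.relpath, so the tree still resolves after being moved to a new computer or user. The helper name is made up; this is not lyman's linking code.

```python
import os
import tempfile

def make_relative_link(target, link_path):
    """Create link_path pointing at target via a relative path."""
    rel_target = os.path.relpath(target, os.path.dirname(link_path))
    if os.path.lexists(link_path):
        os.remove(link_path)  # replace any stale absolute link
    os.symlink(rel_target, link_path)
    return rel_target

# Usage: link analysis/preproc -> ../data/preproc inside a scratch tree
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "data", "preproc"))
os.makedirs(os.path.join(root, "analysis"))
rel = make_relative_link(os.path.join(root, "data", "preproc"),
                         os.path.join(root, "analysis", "preproc"))
```

Because the stored target is relative, renaming or copying the whole tree leaves the link valid.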
Is there a way to tell lyman not to include motion regressors in the GLM?
Possibly not of great concern, but still not the correct behavior.
Change workflow creation from:
mfx, mfx_input, mfx_output = wf.create_volume_mixedfx_workflow(
subject_list=subject_list, regressors=regressors, contrasts=contrasts)
to
mfx, mfx_input, mfx_output = wf.create_volume_mixedfx_workflow(
name=args.output, subject_list=subject_list, regressors=regressors, contrasts=contrasts)
It seems that to use the inverse warp file generated by ANTs with WarpImageMultiTransform, the inverse warpfile must be named 'InverseWarp.nii.gz', or it fails the syntax check. You might consider renaming outputs from inverse_warp to InverseWarp in the next release. Then again, this may not be the case in newer versions of ANTs.
Add subject-level "gray matter" masks from the freesurfer segmentation, and use an average of these at the group stage.
Line 48 in surface_snapshots.py should be updated so that alternate z-thresholds can be plotted.
Current: sig_thresh = np.round(sig_thresh) * 10
New: sig_thresh = np.round(sig_thresh * 10)
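The order of operations matters because the current version rounds away the decimal before scaling, as a quick numeric check shows:

```python
import numpy as np

# For a conventional threshold like z = 2.3, rounding first loses precision
sig_thresh = 2.3
current = np.round(sig_thresh) * 10   # 2.3 -> 2 -> 20
proposed = np.round(sig_thresh * 10)  # 2.3 -> 23.0 -> 23
```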
Is there support in Lyman for if a subject is missing a run of data? (From, for example, a rather buggy mux implementation)
I'm making snapshots of a huge number of Freesurfer subjects (~1000). Every 100 or so, anatomy_snapshots.py is using so much RAM (over 60%) that the server I am letting it run on becomes very unresponsive (we only have 16GB of RAM), so other people using it cannot work properly anymore and I have to quit and restart the execution of the script.
From monitoring its process with top, it seems as if memory usage is increasing with every subject. Do you have an idea what could be the reason for the accumulating memory use? (I don't use the -noclose flag, if this is important)
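A common cause of per-subject memory growth in batch plotting scripts is figures that are never released. A minimal sketch of the pattern, assuming matplotlib is involved in the rendering (subject names here are made up; mayavi scenes would need an analogous close call):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for batch snapshot scripts
import matplotlib.pyplot as plt

for subject in ["subj01", "subj02", "subj03"]:
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])
    # fig.savefig(subject + "_snapshot.png")  # write instead of display
    plt.close(fig)  # release the figure so memory stays flat across subjects

open_figures = plt.get_fignums()  # should be empty after the loop
```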
Fairly straightforward but have to figure out a few things. Basic idea is that it should be possible to use the model workflow to regress confounds out of the design and then register the residual timeseries to use for, e.g. decoding analyses.
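The core operation can be sketched in a few lines of numpy; this is a minimal least-squares illustration, not the lyman workflow itself, and the function name is made up:

```python
import numpy as np

def residualize(ts, confounds):
    """Regress confounds out of a timeseries.

    ts: (n_timepoints, n_voxels); confounds: (n_timepoints, n_regressors).
    """
    # Include an intercept so the voxel means are absorbed by the model
    X = np.column_stack([confounds, np.ones(len(confounds))])
    beta = np.linalg.lstsq(X, ts, rcond=None)[0]
    return ts - X @ beta

# Usage on simulated data: 100 TRs, 50 voxels, 6 confound regressors
rng = np.random.RandomState(0)
ts = rng.randn(100, 50)
confounds = rng.randn(100, 6)
resid = residualize(ts, confounds)
```

The residuals are orthogonal to the confound columns by construction, which is exactly the property a downstream decoding analysis would want.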
I was getting an error about series not being hashable objects when running mvpa.py tools. It seems to have been fixed by changing ds_hash.update(ds["runs"]) to ds_hash.update(ds["runs"].data) in _hash_decoder().
I noticed that in order for the registration step of run_fmri.py to run properly it needs different output files from the run_warp.py stage (using fsl's fnirt). It needs e.g.
data/{subject_id}/normalization/affine.mat and
data/{subject_id}/normalization/warpfield.nii.gz
instead of
data/{subject_id}/normalization/T1_out_masked_flirt.mat
data/{subject_id}/normalization/T1_out_fieldwarp.nii.gz
Is that correct?
This line, lyman/lyman/workflows/preproc.py line 672 in 76fee0a, expects 5 values to be returned from color_palette("deep"), but the current version of sns returns 6 values, causing an error.
When you include values for regressor in the design file, does lyman create two regressors (the linear and constant term)? If so does it take care of orthogonalization? Or should I specify this myself?
I believe line 51 of workflows/preproc.py should be changed if the experiment details dictionary is to match the expected key.
That is, from "whole_brain" to "whole_brain_template".
Otherwise it complains that the inputs module has no output called whole_brain_template.
Cheers!
Not sure whether this is an issue or my wrongdoing or even whether this is the best place to ask it but I noticed the following: I ran the preprocessing without problems. The model (design.csv) consists of two conditions across three sessions. In the experiment file I've included the condition names (condition_names=['left','right']), corresponding to those in the design matrix.
After running run_fmri.py -s subjects.txt -w preproc model I get a /smoothed/model/run_# for each of the runs. When I look at the design (design.mat/png) I see in this particular instance only the rot[x,y,z], trans[x,y,z] and artifacts (two motion-based outliers). I don't see the actual columns corresponding to the design. Granted, it spits out two cope/zstat/etc files as expected, but I'm not sure whether it's based on the model I want (effects of interest + confounds) or the model I see in the design image (i.e. confounds only).
The reason I'd expect at least the baseline contrasts (i.e. left vs. baseline and right vs. baseline) is the following line in the documentation: "If you provided a list of condition names, baseline contrasts are automatically generated for each of these conditions and prepended to this list. " (see http://stanford.edu/~mwaskom/software/lyman/experiments.html#detailed-design-information -> contrasts)
Am I overlooking something?
Great work btw!
Now that timeseries regressors are implemented, it'd be nice if a GLM could be run without requiring the presence of conditions.
One hacky way around this currently is to create a dummy impulse condition at the last TR, but this, of course, is god-awful. :)
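For reference, the design such a GLM would need is well posed without any condition columns at all: just the timeseries regressors plus an intercept. A sketch with simulated data (not lyman API):

```python
import numpy as np

n_tr = 100
rng = np.random.RandomState(0)

# Hypothetical timeseries regressors (e.g. confound components)
regressors = rng.randn(n_tr, 3)

# Design matrix: regressors plus an intercept, no condition onsets
X = np.column_stack([regressors, np.ones(n_tr)])
rank = np.linalg.matrix_rank(X)  # full rank, so the GLM is estimable
```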
For consistency.
It would be helpful to maintain compatibility between versions such they return the same SNR maps
Shouldn't have to do this twice for decoding and evoked analysis, also shouldn't have to do it twice for different summary statistics.
run_fmri.py fails when the crashdir cannot be made because os.getlogin() fails with:
[Errno 2] No such file or directory.
I'm calling run_fmri from a remote desktop (rdesktop with my actual username and password), but apparently it still fails on os.getlogin(). It would be helpful if there were a crash-dir configured in the same way as the working-dir and analysis-dir (e.g. through setup_project.py).
If I had admin rights I could change it locally, but I imagine others face the same problem.
Cheers,
Cris
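One robust pattern for this: os.getlogin() queries the controlling terminal, which remote or daemonized sessions may not have, while getpass.getuser() checks the LOGNAME/USER environment variables and then the password database. A sketch of the fallback (the wrapper name is made up):

```python
import getpass
import os

def safe_login():
    """Return a login name even without a controlling terminal."""
    try:
        return os.getlogin()
    except OSError:
        # rdesktop/cron/ssh sessions often land here
        return getpass.getuser()

user = safe_login()
```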
In the fixedfx script, FLAMEO is called by building a command line:
line 163
# Build the flamo commandline and run
flamecmd = ["flameo",
"--cope=cope_4d.nii.gz",
"--varcope=varcope_4d.nii.gz",
"--mask=mask.nii.gz",
"--dvc=dof.nii.gz",
"--runmode=fe",
"--dm=design.mat",
"--tc=design.con",
"--cs=design.grp",
"--ld=" + contrast,
"--npo"]
But in the mixedfx script it's called by using a nipype node:
line 74
# Fit the mixed effects model
flameo = Node(fsl.FLAMEO(run_mode=exp_info["flame_mode"]), "flameo")
line 167
(mergecope, flameo,
[("merged_file", "cope_file")]),
(mergevarcope, flameo,
[("merged_file", "var_cope_file")]),
(mergevarcope, makemask,
[("merged_file", "varcope_file")]),
(mergedof, flameo,
[("merged_file", "dof_var_cope_file")]),
(makemask, flameo,
[("mask_file", "mask_file")]),
(design, flameo,
[("design_con", "t_con_file"),
("design_grp", "cov_split_file"),
("design_mat", "design_file")]),
Why the difference in style?
I'm trying to figure out the best way to run a within-subject fixed effects model, but the nipype syntax makes a lot more sense to me.
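For what it's worth, the two styles express the same invocation. A sketch that bridges them (the function and parameter names are made up, but the keywords mirror how the nipype fsl.FLAMEO node names its inputs, and the output matches the fixedfx script's command list):

```python
def build_flameo_cmd(cope_file, var_cope_file, mask_file, dof_var_cope_file,
                     design_file, t_con_file, cov_split_file,
                     run_mode="fe", log_dir="stats"):
    """Build the flameo command list from nipype-style keyword arguments."""
    return ["flameo",
            "--cope=" + cope_file,
            "--varcope=" + var_cope_file,
            "--mask=" + mask_file,
            "--dvc=" + dof_var_cope_file,
            "--runmode=" + run_mode,   # "fe" = fixed effects
            "--dm=" + design_file,
            "--tc=" + t_con_file,
            "--cs=" + cov_split_file,
            "--ld=" + log_dir,
            "--npo"]

# Usage mirroring the fixedfx script's hard-coded filenames
cmd = build_flameo_cmd("cope_4d.nii.gz", "varcope_4d.nii.gz", "mask.nii.gz",
                       "dof.nii.gz", "design.mat", "design.con", "design.grp")
```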
And will likely break with > 9 runs (or at least be ugly)
I am using anaconda. I installed all dependencies where available using "conda", the others using "pip".
Still I get the "missing dependencies: seaborn" error when installing lyman (or moss)
Which is a problem if you want to get the ffx tsnr maps.
Hey Michael,
I noticed that in your bbregister you first do a rough skullstrip with BET. The default value for the fractional intensity threshold, .5, is in my experience too high for T2 data. It sometimes results in big chunks of PFC getting cut off. I'm getting (very slight) improvements in my registrations if I run it with a lower threshold (.1).
-Ian
When the subjects.txt file contains just one subject, run_fmri.py treats each character of the sub id as a subject. Works fine when there are at least 2 subjects.
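A likely cause (an assumption, not verified against the lyman source): loading subjects.txt with np.loadtxt yields a 0-dimensional array for a single entry, which later code iterates character by character. Wrapping the result in np.atleast_1d guards both cases:

```python
import numpy as np

def read_subjects(lines):
    """Return a list of subject ids whether the file has one line or many."""
    # np.loadtxt squeezes a single entry to a 0-d array; atleast_1d
    # restores a proper 1-d array so iteration yields whole ids
    return list(np.atleast_1d(np.loadtxt(lines, dtype=str)))

one = read_subjects(["subj01"])
many = read_subjects(["subj01", "subj02"])
```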
Is it possible to add custom confounds (e.g., physio data, signal derived from ventricles/WM, or noise components from a GLMdenoise approach)? I see that I could add this in a regressors file, but it would make the R^2 maps less informative.
The scale is off, it seems about twice as large as the model-level SNR, but the topography looks correct. Possibly still not using the correctly scaled mean image...