
result_caching's Introduction


Brain-Score is a platform to evaluate computational models of brain function on their match to brain measurements in primate vision. The intent of Brain-Score is to adopt many (ideally all) of the experimental benchmarks in the field for the purpose of model testing, falsification, and comparison. To that end, Brain-Score operationalizes experimental data into quantitative benchmarks that any model candidate following the BrainModel interface can be scored on.

Note that you can only access a limited set of public benchmarks when running locally. To score a model on all benchmarks, submit it via the brain-score.org website.

See the documentation for more details, e.g. for submitting a model or benchmark to Brain-Score. For a step-by-step walkthrough on submitting models to the Brain-Score website, see these web tutorials.

See these code examples for scoring models, retrieving data, and using and defining benchmarks and metrics. These previous examples may still be helpful, but their usage has been deprecated since the 2.0 update.

Brain-Score is made by and for the community. To contribute, please send in a pull request.

Local installation

You will need Python = 3.7 and pip >= 18.1.

pip install git+https://github.com/brain-score/vision

Test if the installation is successful by scoring a model on a public benchmark:

from brainscore_vision.benchmarks import public_benchmark_pool

benchmark = public_benchmark_pool['dicarlo.MajajHong2015public.IT-pls']
model = my_model()  # placeholder: any model implementing the BrainModel interface
score = benchmark(model)

# >  <xarray.Score ()>
# >  array(0.07637264)
# >  Attributes:
# >      error:                 <xarray.Score ()>\narray(0.00548197)
# >      raw:                   <xarray.Score ()>\narray(0.22545106)\nAttributes:\...
# >      ceiling:               <xarray.DataArray ()>\narray(0.81579938)\nAttribut...
# >      model_identifier:      my-model
# >      benchmark_identifier:  dicarlo.MajajHong2015public.IT-pls

Some steps may take minutes because data has to be downloaded during first-time use.

Environment Variables

Variable            Description
RESULTCACHING_HOME  directory in which results (e.g. benchmark ceilings) are cached; ~/.result_caching by default (see https://github.com/brain-score/result_caching)
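
For example, to keep the cache on a fast scratch disk, set the variable before running any cached computation (the path below is only illustrative):

import os

# result_caching reads RESULTCACHING_HOME when it writes and reads cache files,
# so set it before any cached function runs
os.environ['RESULTCACHING_HOME'] = '/scratch/username/.result_caching'  # hypothetical path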

License

MIT license

Troubleshooting

`ValueError: did not find HDF5 headers` during netcdf4 installation: pip seems to fail to properly set up the HDF5_DIR required by netcdf4. Use conda instead: `conda install netcdf4`.
Repeated runs of a benchmark/model do not change the outcome even though the code was changed: results (scores, activations) are cached on disk using https://github.com/mschrimpf/result_caching. Delete the corresponding file or directory to clear the cache.
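
A minimal sketch for clearing cached results programmatically, assuming the default cache location (the activations directory name below is taken from the traceback in the issues section; adjust it to the function whose cache you want to invalidate):

import shutil
from pathlib import Path

cache_home = Path.home() / '.result_caching'  # or the value of RESULTCACHING_HOME
# each cached function gets its own subdirectory named <module>.<function>
shutil.rmtree(cache_home / 'model_tools.activations.core.ActivationsExtractorHelper._from_paths_stored',
              ignore_errors=True)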

CI environment

Add CI-related build commands to test_setup.sh. The script is executed in the CI environment for unit tests.

References

If you use Brain-Score in your work, please cite "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?" (technical) and "Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence" (perspective) as well as the respective benchmark sources.

@article{SchrimpfKubilius2018BrainScore,
  title={Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?},
  author={Martin Schrimpf and Jonas Kubilius and Ha Hong and Najib J. Majaj and Rishi Rajalingham and Elias B. Issa and Kohitij Kar and Pouya Bashivan and Jonathan Prescott-Roy and Franziska Geiger and Kailyn Schmidt and Daniel L. K. Yamins and James J. DiCarlo},
  journal={bioRxiv preprint},
  year={2018},
  url={https://www.biorxiv.org/content/10.1101/407007v2}
}

@article{Schrimpf2020integrative,
  title={Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence},
  author={Schrimpf, Martin and Kubilius, Jonas and Lee, Michael J and Murty, N Apurva Ratan and Ajemian, Robert and DiCarlo, James J},
  journal={Neuron},
  year={2020},
  url={https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-X}
}

result_caching's People

Contributors

franzigeiger, mike-ferguson, mschrimpf, p-mc-grath


result_caching's Issues

result_caching raises FileNotFoundError on a small percentage of runs, seemingly at random

I'm testing a large number of models on Brain-Score. On a small percentage of my runs, result_caching raises FileNotFoundError. The error recurs across runs, but any particular failure cannot be reproduced, suggesting that it is probabilistic.

Traceback (most recent call last):
  File "scripts/compute_eigenspectra_and_fit_encoding_model.py", line 94, in <module>
    dim_reduc_str=wandb_config['dim_reduc'])
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regression_dimensionality/utils.py", line 123, in fit_encoder
    score = model_scores(benchmark=benchmark, layers=[layer], prerun=True)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/model_tools/brain_transformation/neural.py", line 108, in __call__
    model=self._activations_model, benchmark=benchmark, layers=layers, prerun=prerun)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/result_caching/__init__.py", line 312, in wrapper
    result = function(**reduced_call_args)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/model_tools/brain_transformation/neural.py", line 122, in _call
    score = benchmark(layer_model)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/brainscore/utils/__init__.py", line 80, in __call__
    return self.content(*args, **kwargs)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/brainscore/benchmarks/_neural_common.py", line 26, in __call__
    source_assembly = candidate.look_at(stimulus_set, number_of_trials=self._number_of_trials)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/model_tools/brain_transformation/neural.py", line 143, in look_at
    self._model(layers=self._layers, stimuli=stimuli)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/model_tools/activations/pytorch.py", line 41, in __call__
    return self._extractor(*args, **kwargs)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/model_tools/activations/core.py", line 41, in __call__
    return self.from_stimulus_set(stimulus_set=stimuli, layers=layers, stimuli_identifier=stimuli_identifier)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/model_tools/activations/core.py", line 55, in from_stimulus_set
    activations = self.from_paths(stimuli_paths=stimuli_paths, layers=layers, stimuli_identifier=stimuli_identifier)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/model_tools/activations/core.py", line 73, in from_paths
    activations = fnc(layers=layers, stimuli_paths=reduced_paths)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/result_caching/__init__.py", line 318, in wrapper
    self.save(result, function_identifier)
  File "/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/regdim_venv/lib/python3.7/site-packages/result_caching/__init__.py", line 125, in save
    os.rename(savepath_part, path)
FileNotFoundError: [Errno 2] No such file or directory: '/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/.result_caching/model_tools.activations.core.ActivationsExtractorHelper._from_paths_stored/identifier=architecture:RF-8000-cosine-bernoulli-b-ns|task:None|kind:Rand|source:RS|lyr:mlp|agg:none,stimuli_identifier=dicarlo.hvm-public.pkl.filepart' -> '/home/gridsan/rschaeffer/FieteLab-Reg-Eff-Dim/.result_caching/model_tools.activations.core.ActivationsExtractorHelper._from_paths_stored/identifier=architecture:RF-8000-cosine-bernoulli-b-ns|task:None|kind:Rand|source:RS|lyr:mlp|agg:none,stimuli_identifier=dicarlo.hvm-public.pkl' 

I do have many processes (i.e. different SLURM jobs) running in parallel. Could that be causing this error?
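
Parallel jobs are the likely cause: the library writes results to a shared <path>.filepart and then renames it (see os.rename in the traceback), so one job's rename can remove the part file before another job's rename runs. A minimal sketch of a possible fix (hypothetical, not the library's actual code) that gives each process a unique temporary file:

import os
import pickle
import uuid

def save_atomically(result, savepath):
    # unique suffix so parallel jobs never share a partial file
    part_path = f"{savepath}.{uuid.uuid4().hex}.filepart"
    with open(part_path, 'wb') as file:
        pickle.dump(result, file)
    os.replace(part_path, savepath)  # atomic on POSIX; concurrent writers simply overwrite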

computing missing prints too much

The "computing missing" debug statement,

self._logger.debug(f"Computing missing: {reduced_call_args}")

currently prints every parameter passed to the function in full.

This results in very long logs, for instance from http://braintree.mit.edu:8080/job/run_benchmarks/1149/console:

2021-09-14 12:03:11,396 DEBUG:result_caching._XarrayStorage:Computing missing: {'self': <model_tools.activations.core.ActivationsExtractorHelper object at 0x2ad892b9e990>, 'identifier': 'voneresnet50_at-pca_1000', 'stimuli_identifier': 'dicarlo.hvm-public-trial048', 'stimuli_paths': ['/nobackup/scratch/Fri/score_models_env_1149/.brainio/image_dicarlo_hvm-public/_19_flyingBoat_rx+27.966_ry+88.590_rz+79.439_tx+00.080_ty-00.121_s+01.171_6c042254d9a861e8b91e0118ff22504a7e95e7af_256x256.png', '/nobackup/scratch/Fri/score_models_env_1149/.brainio/image_dicarlo_hvm-public/face0008_rx+05.323_ry-10.518_rz-42.807_tx-00.294_ty+00.039_s+00.694_48c28a979b99b0737ac58f9abddc505d25764a9b_256x256.png',
...

where it prints all the stimuli_paths.

Proposed solution: truncate very long parameters.
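
One possible implementation of that fix (hypothetical, not the library's actual code) caps each logged argument at a fixed length:

def _truncate(value, max_length=100):
    text = repr(value)
    if len(text) <= max_length:
        return text
    return text[:max_length] + f"... [{len(text)} characters total]"

# in the storage wrapper, instead of logging reduced_call_args directly:
# self._logger.debug("Computing missing: " + ", ".join(
#     f"{key}={_truncate(value)}" for key, value in reduced_call_args.items()))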

_XarrayStorage cannot handle general cases

_XarrayStorage does not support merging activations with different neuroid coords, e.g. one activation array with a channel coord and another with additional channel_x and channel_y coords (see the sketch below).

It is also unclear how to extend the neuroid_num coord when the activation array of a new layer is added.
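
A small example of the first problem (illustrative only; the coordinate names follow the issue, not any particular model). Depending on the xarray version, concatenating the two arrays either raises or silently fills the mismatched coords with NaN; neither gives a cleanly merged assembly:

import numpy as np
import xarray as xr

# layer A: neuroids described by a single channel coord
a = xr.DataArray(np.zeros((2, 3)), dims=['presentation', 'neuroid'],
                 coords={'channel': ('neuroid', [0, 1, 2])})
# layer B: neuroids described by channel_x/channel_y instead
b = xr.DataArray(np.zeros((2, 4)), dims=['presentation', 'neuroid'],
                 coords={'channel_x': ('neuroid', [0, 0, 1, 1]),
                         'channel_y': ('neuroid', [0, 1, 0, 1])})
try:
    merged = xr.concat([a, b], dim='neuroid')
    print(merged.coords)  # mismatched coords end up NaN-filled
except ValueError as error:
    print('cannot merge directly:', error)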

turning off result caching

Can we add a switch to turn on/off result caching?
Because the ceiling score function saves its results in the same location for the same benchmark_identifier (it does not see the model_identifier), it will not let me compute ceiling scores for different models on the same benchmark_identifier.
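
Note that the result_caching README documents a RESULTCACHING_DISABLE environment variable for exactly this kind of switch (assuming the installed version supports it); it does not, however, fix the missing model_identifier in the ceiling cache key:

import os

# "1" disables all caching; alternatively, a comma-separated list of
# <module>.<function> identifiers disables caching selectively.
# Set this before any results are computed.
os.environ['RESULTCACHING_DISABLE'] = '1'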
