Awesome NeuroAI Papers

A curated list of Papers & Reviews from the intersection of deep learning and neuroscience

This list provides an overview of recent publications connecting neuroscience and computer science research. As both fields are growing rapidly, this list presents only a small subset of relevant papers. If important papers are missing, please send a pull request.

Papers

Millet, J., Caucheteux, C., Orhan, P., Boubenec, Y., Gramfort, A., Dunbar, E., ... & King, J. R. Toward a realistic model of speech processing in the brain with self-supervised learning arXiv (2022)

Sucevic, J., & Schapiro, A. C. A neural network model of hippocampal contributions to category learning bioRxiv (2022)

Bakhtiari, S., Mineault, P., Lillicrap, T., Pack, C., & Richards, B. The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning NeurIPS (2021)

Conwell, C., Mayo, D., Barbu, A., Buice, M., Alvarez, G., & Katz, B. Neural regression, representational similarity, model zoology & neural taskonomy at scale in rodent visual cortex NeurIPS (2021)

Krotov, Dmitry. Hierarchical associative memory arXiv (2021)

Krotov, Dmitry, and John Hopfield. Large associative memory problem in neurobiology and machine learning ICLR (2021)

Whittington, J. C., Warren, J., & Behrens, T. E. Relating transformers to models and neural representations of the hippocampal formation arXiv (2021)

Schrimpf, M., Blank, I. A., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., ... & Fedorenko, E. The neural architecture of language: Integrative modeling converges on predictive processing PNAS (2021)

Liang, Yuchen, Chaitanya K. Ryali, Benjamin Hoover, Leopold Grinberg, Saket Navlakha, Mohammed J. Zaki, and Dmitry Krotov. Can a Fruit Fly Learn Word Embeddings? ICLR (2021)

George, D., Rikhye, R. V., Gothoskar, N., Guntupalli, J. S., Dedieu, A., & Lázaro-Gredilla, M. Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps Nature communications (2021)

Whittington, J. C., Muller, T. H., Mark, S., Chen, G., Barry, C., Burgess, N., & Behrens, T. E. The Tolman-Eichenbaum machine: Unifying space and relational memory through generalization in the hippocampal formation Cell (2020)

Banino, A., Badia, A. P., Köster, R., Chadwick, M. J., Zambaldi, V., Hassabis, D. & Blundell, C. Memo: A deep network for flexible combination of episodic memories arXiv (2020)

Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, Daniel L. K. Yamins Unsupervised Neural Network Models of the Ventral Visual Stream bioRxiv (2020)

Tyler Bonnen, Daniel L.K. Yamins, Anthony D. Wagner When the ventral visual stream is not enough: A deep learning account of medial temporal lobe involvement in perception bioRxiv (2020)

Kim, K., Sano, M., De Freitas, J., Haber, N., & Yamins, D. Active World Model Learning with Progress Curiosity arXiv (2020)

Guangyu Robert Yang, Xiao-Jing Wang Artificial Neural Networks for Neuroscientists: A Primer Neuron (2020)

Glaser, J.I., Benjamin, A.S., Chowdhury, R.H., Perich, M.G., Miller, L.E., Kording, K.P. Machine Learning for Neural Decoding eNeuro (2020)

Jones, I. S., & Kording, K. P. Can Single Neurons Solve MNIST? The Computational Power of Biological Dendritic Trees arXiv (2020)

Rolnick, D., & Kording, K. Reverse-engineering deep ReLU networks ICML (2020)

Geirhos, R., Narayanappa, K., Mitzkus, B., Bethge, M., Wichmann, F. A., & Brendel, W. On the surprising similarities between supervised and self-supervised models arXiv (2020)

Storrs, K. R., Kietzmann, T. C., Walther, A., Mehrer, J., & Kriegeskorte, N. Diverse deep neural networks all predict human IT well, after training and fitting bioRxiv (2020)

Yonatan Sanz Perl, Hernán Boccacio, Ignacio Pérez-Ipiña, Federico Zamberlán, Helmut Laufs, Morten Kringelbach, Gustavo Deco, Enzo Tagliazucchi Generative embeddings of brain collective dynamics using variational autoencoders arXiv (2020)

George, D., Lazaro-Gredilla, M., Lehrach, W., Dedieu, A., & Zhou, G. A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model bioRxiv (2020)

van Bergen, R. S., & Kriegeskorte, N. Going in circles is the way forward: the role of recurrence in visual inference arXiv (2020)

Joseph G. Makin, David A. Moses, Edward F. Chang Machine translation of cortical activity to text with an encoder–decoder framework Nature Neuroscience (2020)

Richards, B. A., & Lillicrap, T. P. Dendritic solutions to the credit assignment problem Current opinion in neurobiology (2019)

Sinz, F. H., Pitkow, X., Reimer, J., Bethge, M., & Tolias, A. S. Engineering a less artificial intelligence Neuron (2019)

Kubilius, J., Schrimpf, M., Kar, K., Rajalingham, R., Hong, H., Majaj, N. & DiCarlo, J. J. Brain-like object recognition with high-performing shallow recurrent ANNs Advances in Neural Information Processing Systems (2019)

Barrett, D. G., Morcos, A. S., & Macke, J. H. Analyzing biological and artificial neural networks: challenges with opportunities for synergy? Current opinion in neurobiology (2019)

Stringer, C., Pachitariu, M., Steinmetz, N., Carandini, M., & Harris, K. D. High-dimensional geometry of population responses in visual cortex Nature (2019)

Beniaguev David, Segev Idan, London Michael Single Cortical Neurons as Deep Artificial Neural Networks bioRxiv (2019)

Krotov, D. & Hopfield, J.J. Unsupervised learning by competing hidden units PNAS (2019)

Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass A solution to the learning dilemma for recurrent networks of spiking neurons bioRxiv (2019)

Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum Dendritic action potentials and computation in human layer 2/3 cortical neurons Science (2019)

Adam Gaier, David Ha Weight Agnostic Neural Networks arXiv (2019)

Ben Sorscher, Gabriel C. Mel, Surya Ganguli, Samuel A. Ocko A unified theory for the origin of grid cells through the lens of pattern formation NeurIPS (2019)

Sara Hooker, Aaron Courville, Yann Dauphin, Andrea Frome Selective Brain Damage: Measuring the Disparate Impact of Model Pruning arXiv (2019)

Walker, E. Y., Sinz, F. H., Cobos, E., Muhammad, T., Froudarakis, E., Fahey, P. G. & Tolias, A. S. Inception loops discover what excites neurons most using deep predictive models Nature neuroscience (2019)

Alessio Ansuini, Alessandro Laio, Jakob H. Macke, Davide Zoccolan Intrinsic dimension of data representations in deep neural networks arXiv (2019)

Josh Merel, Diego Aldarondo, Jesse Marshall, Yuval Tassa, Greg Wayne, Bence Ölveczky Deep neuroethology of a virtual rodent arXiv (2019)

Zhe Li, Wieland Brendel, Edgar Y. Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian H. Sinz, Xaq Pitkow, Andreas S. Tolias Learning From Brains How to Regularize Machines arXiv (2019)

Hidenori Tanaka, Aran Nayebi, Niru Maheswaranathan, Lane McIntosh, Stephen Baccus, Surya Ganguli From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction NeurIPS (2019)

Stefano Recanatesi, Matthew Farrell, Guillaume Lajoie, Sophie Deneve, Mattia Rigotti, and Eric Shea-Brown Predictive learning extracts latent space representations from sensory observations bioRxiv (2019)

Nasr, Khaled, Pooja Viswanathan, and Andreas Nieder. Number detectors spontaneously emerge in a deep neural network designed for visual object recognition. Science Advances (2019)

Bashivan, Pouya, Kohitij Kar, and James J. DiCarlo. Neural population control via deep image synthesis. Science (2019)

Ponce, Carlos R., Will Xiao, Peter F. Schade, Till S. Hartmann, Gabriel Kreiman, and Margaret S. Livingstone. Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences Cell (2019)

Kar, Kohitij, Jonas Kubilius, Kailyn M. Schmidt, Elias B. Issa, and James J. DiCarlo. Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior. Nature Neuroscience (2019)

Russin, Jake, Jason Jo, and Randall C. O'Reilly. Compositional generalization in a deep seq2seq model by separating syntax and semantics. arXiv (2019)

Rajalingham, Rishi, Elias B. Issa, Pouya Bashivan, Kohitij Kar, Kailyn Schmidt, and James J. DiCarlo. Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks. Journal of Neuroscience (2018)

Eslami, SM Ali, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S. Morcos, Marta Garnelo, Avraham Ruderman et al. Neural scene representation and rendering. Science (2018)

Banino, Andrea, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel et al. Vector-based navigation using grid-like representations in artificial agents. Nature (2018)

Schrimpf, Martin, Kubilius, Jonas, Hong, Ha, Majaj, Najib J., Rajalingham, Rishi, Issa, Elias B., Kar, Kohitij, Bashivan, Pouya, Prescott-Roy, Jonathan, Geiger, Franziska, Schmidt, Kailyn, Yamins, Daniel L. K., and DiCarlo, James J. Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like? bioRxiv (2018)

Kell, A. J., Yamins, D. L., Shook, E. N., Norman-Haignere, S. V., & McDermott, J. H. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy Neuron (2018)

Guerguiev, Jordan, Timothy P. Lillicrap, and Blake A. Richards. Towards deep learning with segregated dendrites. eLife (2017)

Kanitscheider, I., & Fiete, I. Training recurrent networks to generate hypotheses about how the brain solves hard navigation problems arXiv (2017)

George, D., Lehrach, W., Kansky, K., Lázaro-Gredilla, M., Laan, C., Marthi, B., ... & Phoenix, D. S. A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs Science (2017)

Bengio, Yoshua, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, and Zhouhan Lin. Towards biologically plausible deep learning. arXiv (2015).

Güçlü, Umut, and Marcel AJ van Gerven. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience (2015)

Cadieu, Charles F., Ha Hong, Daniel LK Yamins, Nicolas Pinto, Diego Ardila, Ethan A. Solomon, Najib J. Majaj, and James J. DiCarlo. Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS computational biology (2014)

Reviews

Lindsay, G. W. Convolutional neural networks as a model of the visual system: Past, present, and future arXiv (2021)

Hasselmo, M. E., Alexander, A. S., Hoyland, A., Robinson, J. C., Bezaire, M. J., Chapman, G. W., ... & Dannenberg, H. The Unexplored Territory of Neural Models: Potential Guides for Exploring the Function of Metabotropic Neuromodulation Neuroscience (2021)

Bermudez-Contreras, E., Clark, B.J., Wilber, A. The Neuroscience of Spatial Navigation and the Relationship to Artificial Intelligence Front. Comput. Neurosci. (2020)

Botvinick, M., Wang, J.X., Dabney, W., Miller, K.J., Kurth-Nelson, Z. Deep Reinforcement Learning and Its Neuroscientific Implications Neuron (2020)

Lillicrap, T.P., Santoro, A., Marris, L., Akerman, C.J. & Hinton, G. Backpropagation and the brain Nature Reviews Neuroscience (2020)

Saxe, A., Nelli, S. & Summerfield, C. If deep learning is the answer, then what is the question? arXiv (2020)

Hasson, U., Nastase, S. A., & Goldstein, A. Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks. Neuron (2020)

Schrimpf, M., Kubilius, J., Lee, M. J., Ratan Murty, N. A., Ajemian, R., & DiCarlo, J. J. Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence. Neuron (2020)

Storrs, K. R., & Kriegeskorte, N. Deep learning for cognitive neuroscience. arXiv (2019)

Zador, A.M. A critique of pure learning and what artificial neural networks can learn from animal brains Nature Communications (2019)

Richards, Blake A., Timothy P. Lillicrap, Philippe Beaudoin, Yoshua Bengio, Rafal Bogacz, Amelia Christensen, Claudia Clopath et al. A deep learning framework for neuroscience. Nature neuroscience (2019)

Kietzmann, T. C., McClure, P., & Kriegeskorte, N. Deep neural networks in computational neuroscience bioRxiv (2018)

Hassabis, Demis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Neuroscience-inspired artificial intelligence. Neuron (2017)

Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and brain sciences (2017).

Marblestone, Adam H., Greg Wayne, and Konrad P. Kording. Toward an integration of deep learning and neuroscience. Frontiers in computational neuroscience (2016)

Blogs

Mineault, Patrick What’s the endgame of neuroAI? (2022)

Mineault, Patrick Unsupervised models of the brain (2021)

Dettmers, Tim The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near (2015)

More awesome lists

Anna Wolff, Martin Hebart. DNN vs. Brain and Behavior

Francesco Innocenti. Neuro-AI papers
