Crossref Citations
This article has been cited by the following publications. This list is generated based on data provided by Crossref.
Rubinov, Mika (2023). Circular and unified analysis in network neuroscience. eLife, Vol. 12.
Lu, Zhuo (2023). Research on Stock History Data Mining and Prediction Algorithm Based on Long Short-Term Memory Network. p. 1.
Doerig, Adrien; Sommers, Rowan P.; Seeliger, Katja; Richards, Blake; Ismael, Jenann; Lindsay, Grace W.; Kording, Konrad P.; Konkle, Talia; van Gerven, Marcel A. J.; Kriegeskorte, Nikolaus; and Kietzmann, Tim C. (2023). The neuroconnectionist research programme. Nature Reviews Neuroscience, Vol. 24, Issue 7, p. 431.
Lin, Chujun; Bulls, Landry S.; Tepfer, Lindsey J.; Vyas, Amisha D.; and Thornton, Mark A. (2023). Advancing Naturalistic Affective Science with Deep Learning. Affective Science, Vol. 4, Issue 3, p. 550.
Lindeberg, Tony (2023). Covariance properties under natural image transformations for the generalised Gaussian derivative model for visual receptive fields. Frontiers in Computational Neuroscience, Vol. 17.
Walther, Dirk B.; Farzanfar, Delaram; Han, Seohee; and Rezanejad, Morteza (2023). The mid-level vision toolbox for computing structural properties of real-world images. Frontiers in Computer Science, Vol. 5.
van Dyck, Leonard Elia and Gruber, Walter Roland (2023). Modeling Biological Face Recognition with Deep Convolutional Neural Networks. Journal of Cognitive Neuroscience, Vol. 35, Issue 10, p. 1521.
Clark, Kevin B. (2023). Neural Field Continuum Limits and the Structure–Function Partitioning of Cognitive–Emotional Brain Networks. Biology, Vol. 12, Issue 3, p. 352.
Finn, Emily S.; Poldrack, Russell A.; and Shine, James M. (2023). Functional neuroimaging as a catalyst for integrated neuroscience. Nature, Vol. 623, Issue 7986, p. 263.
Bowers, Jeffrey S.; Malhotra, Gaurav; Adolfi, Federico; Dujmović, Marin; Montero, Milton L.; Biscione, Valerio; Puebla, Guillermo; Hummel, John H.; and Heaton, Rachel F. (2023). On the importance of severely testing deep learning models of cognition. Cognitive Systems Research, Vol. 82, p. 101158.
Wichmann, Felix A. and Geirhos, Robert (2023). Are Deep Neural Networks Adequate Behavioral Models of Human Visual Perception? Annual Review of Vision Science, Vol. 9, Issue 1, p. 501.
Nadler, Ethan O.; Darragh-Ford, Elise; Desikan, Bhargav Srinivasa; Conaway, Christian; Chu, Mark; Hull, Tasker; and Guilbeault, Douglas (2023). Divergences in color perception between deep neural networks and humans. Cognition, Vol. 241, p. 105621.
Yang, Fumeng; Ma, Yuxin; Harrison, Lane; Tompkin, James; and Laidlaw, David H. (2023). How Can Deep Neural Networks Aid Visualization Perception Research? Three Studies on Correlation Judgments in Scatterplots. p. 1.
Li, Qiang (2023). Saliency prediction based on multi-channel models of visual processing. Machine Vision and Applications, Vol. 34, Issue 4.
Biscione, Valerio and Bowers, Jeffrey S. (2023). Mixed Evidence for Gestalt Grouping in Deep Neural Networks. Computational Brain & Behavior, Vol. 6, Issue 3, p. 438.
Schiatti, Lucia; Gori, Monica; Schrimpf, Martin; Cappagli, Giulia; Morelli, Federica; Signorini, Sabrina; Katz, Boris; and Barbu, Andrei (2023). Modeling Visual Impairments with Artificial Neural Networks: a Review. p. 1979.
Gu, Zijin; Jamison, Keith; Sabuncu, Mert R.; and Kuceyeski, Amy (2023). Human brain responses are modulated when exposed to optimized natural images or synthetically generated images. Communications Biology, Vol. 6, Issue 1.
Westfall, Mason (2023). Toward biologically plausible artificial vision. Behavioral and Brain Sciences, Vol. 46.
Gaya-Morey, F. Xavier; Ramis-Guarinos, Silvia; Manresa-Yee, Cristina; and Buades-Rubio, José M. (2024). Unveiling the human-like similarities of automatic facial expression recognition: An empirical exploration through explainable AI. Multimedia Tools and Applications, Vol. 83, Issue 38, p. 85725.
Gigerenzer, Gerd (2024). Psychological AI: Designing Algorithms Informed by Human Psychology. Perspectives on Psychological Science, Vol. 19, Issue 5, p. 839.
Target article
Deep problems with neural network models of human vision
Related commentaries (29)
Explananda and explanantia in deep neural network models of neurological network functions
A deep new look at color
Beyond the limitations of any imaginable mechanism: Large language models and psycholinguistics
Comprehensive assessment methods are key to progress in deep learning
Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses
Even deeper problems with neural network models of language
Fixing the problems of deep neural networks will require better training data and learning algorithms
For deep networks, the whole equals the sum of the parts
For human-like models, train on human-like tasks
Going after the bigger picture: Using high-capacity models to understand mind and brain
Implications of capacity-limited, generative models for human vision
Let's move forward: Image-computable models and a common model evaluation scheme are prerequisites for a scientific understanding of human vision
Modelling human vision needs to account for subjective experience
Models of vision need some action
My pet pig won't fly and I want a refund
Neither hype nor gloom do DNNs justice
Neural networks need real-world behavior
Neural networks, AI, and the goals of modeling
Perceptual learning in humans: An active, top-down-guided process
Psychophysics may be the game-changer for deep neural networks (DNNs) to imitate the human vision
Statistical prediction alone cannot identify good models of behavior
The model-resistant richness of human visual experience
The scientific value of explanation and prediction
There is a fundamental, unbridgeable gap between DNNs and the visual cortex
Thinking beyond the ventral stream: Comment on Bowers et al.
Using DNNs to understand the primate vision: A shortcut or a distraction?
Where do the hypotheses come from? Data-driven learning in science and the brain
Why psychologists should embrace rather than abandon DNNs
You can't play 20 questions with nature and win redux
Author response
Clarifying status of DNNs as models of human vision