Tuesday, March 18, 2014

[Comp-neuro] Frontiers Computational Neuroscience call for papers on "Integrating computational and neural findings in visual object perception"

Frontiers in Computational Neuroscience, special issue on "Integrating computational and neural findings in visual object perception"

 

http://www.frontiersin.org/computational_neuroscience/researchtopics/integrating_computational_and_/1618

 

Abstract submission deadline: April 1st, 2014

Full paper submission deadline: June 1st, 2014

 

 

Topic editors:

Judith Peters, Maastricht University / The Netherlands Institute for Neuroscience, The Netherlands

Hans Op de Beeck, University of Leuven (KU Leuven), Belgium

Rainer Goebel, Maastricht University / The Netherlands Institute for Neuroscience, The Netherlands

 

 

The advent of new imaging techniques such as ultra-high field fMRI and advanced multi-unit recordings provides insights into the large-scale neural activation patterns underlying visual object perception in unprecedented detail. In parallel, innovative neuroscientific applications of machine learning and computational methods have been developed, including various encoding and decoding methods (Miyawaki et al., 2008; Naselaris et al., 2011), representational similarity analysis (Kriegeskorte et al., 2008; Op de Beeck et al., 2001), and hyperalignment (Haxby et al., 2011), offering novel opportunities to tackle longstanding questions concerning the neural correlates of object representations (e.g., DiCarlo et al., 2012; Marr, 1982; Biederman, 1987). Likewise, new techniques for integrating simulated and empirical data (e.g., Braet, Kubilius, Wagemans, & Op de Beeck, in press; Peters et al., 2012, 2010; Goebel & De Weerd, 2009) allow direct testing of neural findings against the predictions of various computational models of object perception, which may facilitate the incremental, cross-fertilizing development of both biologically validated machine vision models and computationally interpretable empirical models of neural object perception.
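As a concrete illustration of one of the methods named above, the following is a minimal sketch of the core computation in representational similarity analysis (Kriegeskorte et al., 2008), written in Python with NumPy and SciPy. The array shapes, variable names, and distance choices are illustrative assumptions, not taken from the cited work: each system (a brain region or a model layer) is summarized by a representational dissimilarity matrix (RDM) over a common stimulus set, and two systems are compared by rank-correlating the upper triangles of their RDMs.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    def compute_rdm(patterns):
        # patterns: (n_stimuli, n_features) response matrix, e.g. voxels of
        # a brain region or units of a model layer (shapes are hypothetical).
        # Correlation distance is one common dissimilarity choice for RDMs.
        return squareform(pdist(patterns, metric="correlation"))

    def compare_rdms(rdm_a, rdm_b):
        # Spearman-correlate the upper triangles so that only the relative
        # geometry of the stimulus set matters, not absolute response scales.
        iu = np.triu_indices_from(rdm_a, k=1)
        rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
        return rho

    # Illustrative use with random stand-ins for neural and model data:
    rng = np.random.default_rng(0)
    neural = rng.standard_normal((20, 500))    # 20 stimuli x 500 voxels
    model = rng.standard_normal((20, 4096))    # 20 stimuli x 4096 model units
    print(compare_rdms(compute_rdm(neural), compute_rdm(model)))

Because only the relative geometry of the stimulus set enters the comparison, the same procedure applies unchanged to fMRI patterns, multi-unit recordings, and model activations, which is what makes such abstraction useful for the model-to-brain comparisons this Research Topic targets.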

 

The aim of this Research Topic is to provide a forum for state-of-the-art research integrating computational and empirical approaches to the study of the neural mechanisms underlying visual object perception, as observed behaviorally in humans and other animals, including nonhuman primates and rodents. We welcome contributions on the computational mechanisms underlying object perception, addressing (but not limited to) key questions such as:

• What is the nature of neural object representations (e.g., the degree of invariance, sparseness) and how do these representations change across different processing stages?

• How are object features (e.g., Tanaka, 1996) computationally integrated into coherent object representations?

• How do object representations in the ventral visual system allow invariant recognition while retaining the specificity needed to distinguish between similar exemplars?

• How are object representations read out by higher-order areas (e.g., hippocampus and prefrontal cortex), and how do current task instructions (e.g., between- versus within-category discrimination) or past experiences influence this read-out?

• How are object perception mechanisms shaped by experience and development?

• How can we optimally compare computer vision models to empirically-derived neural models based on human or animal object perception?

 

Pre-submission inquiries can be sent to Judith Peters (j.peters@maastrichtuniversity.nl).

 

All articles become freely available upon publication and benefit from the unique marketing tools developed by Frontiers.

 

 

--

Judith Peters (PhD)

 

Maastricht Brain Imaging Center (M-BIC), Maastricht University

Faculty of Psychology and Neuroscience, Maastricht University

Neuroimaging and Neuromodeling group, Netherlands Institute for Neuroscience, Amsterdam

 
