, 2008a) were then used to screen for the best algorithms. The resulting algorithms exceeded the performance of state-of-the-art computer vision models that had been carefully constructed over many years (Pinto et al., 2009b). These very large, instantiated algorithm spaces are now being used to design large-scale neurophysiological recording experiments that aim to winnow out progressively more accurate models of the ventral visual stream. Although great strides have been made in biologically inspired vision algorithms (e.g., Hinton and Salakhutdinov, 2006; LeCun et al., 2004; Riesenhuber and Poggio, 1999b; Serre et al., 2007b; Ullman and Bart, 2004), the distance between human and computational algorithm performance remains poorly understood because there is little agreement on what the benchmarks should be. For example, one promising object recognition algorithm is competitive with humans under short presentations (20 ms) and backward-masked conditions, but its performance is still far below unfettered, 200 ms human core recognition performance (Serre et al., 2007a).
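As a purely illustrative sketch of the screening idea described above (the model family, hyperparameter ranges, data, and readout below are assumptions of this example, not the pipeline of the cited studies), one could randomly sample candidate model configurations, score each on a held-out recognition task with a simple linear readout, and keep the top performers:

```python
# Minimal sketch of high-throughput screening over a candidate model family.
# All parameter names, ranges, and data are illustrative assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def sample_hyperparameters():
    """Draw one candidate model configuration at random (hypothetical ranges)."""
    return {
        "n_filters": int(rng.choice([16, 32, 64, 128])),
        "pool_size": int(rng.choice([2, 4, 8])),
        "nonlinearity": str(rng.choice(["relu", "quadratic"])),
    }

def extract_features(images, params):
    """Placeholder feature extractor standing in for a candidate
    biologically inspired visual model parameterized by `params`."""
    flat = images.reshape(len(images), -1)
    # A fixed random projection keeps the sketch runnable; a real candidate
    # model would compute filter responses, nonlinearities, and pooling here.
    proj = rng.standard_normal((flat.shape[1], params["n_filters"]))
    feats = flat @ proj
    return np.maximum(feats, 0) if params["nonlinearity"] == "relu" else feats ** 2

# Toy "benchmark": random images with binary object labels (stand-in data).
images = rng.standard_normal((200, 32, 32))
labels = rng.integers(0, 2, size=200)

candidates = []
for _ in range(20):  # screen 20 random candidates
    params = sample_hyperparameters()
    feats = extract_features(images, params)
    # Simple linear readout, scored by cross-validated accuracy.
    acc = cross_val_score(LinearSVC(), feats, labels, cv=3).mean()
    candidates.append((acc, params))

best_acc, best_params = max(candidates, key=lambda c: c[0])
print(f"best screened candidate: {best_params} (accuracy {best_acc:.2f})")
```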

How can we ask whether an instantiated theory of primate object recognition is correct if we do not have an agreed-upon definition of what “object recognition” is? Although we have given a loose definition (section 1), a practical definition that can drive progress must operationally boil down to a strategy for generating sets of visual images or movies and defined tasks that can be measured in behavior, neuronal populations, and bio-inspired algorithms. This is easier said than done, as such tests must consider psychophysics, neuroscience, and computer vision; even supposed “natural, real-world” object recognition benchmarks do not easily distinguish between “state-of-the-art” computer vision algorithms and the algorithms that neuroscientists consider to be equivalent to a “null” model (e.g., performance of a crude model V1 population; Pinto et al., 2008b). Possible paths forward on the problem of benchmark tasks are outlined elsewhere (Pinto et al., 2009a; Pinto et al., 2008b), and the next steps require extensive psychophysical testing on those tasks to systematically characterize human abilities (e.g., Pinto et al., 2010; Majaj et al., 2012). At a sociological level, progress has been challenged by the fact that the three most relevant research communities have historically been incentivized to focus on different objectives. Neuroscientists have focused on the problem of explaining the responses of individual neurons (e.g., Brincat and Connor, 2004; David et al., 2006) or mapping the locations of those neurons in the brain (e.g., Tsao et al., 2003), and using neuronal data to find algorithms that explain human recognition performance has been only a hoped-for, but distant, future outcome.
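To make the “null model” comparison above concrete, the following sketch (an illustrative construction using assumed data and features, not the protocol of Pinto et al., 2008b) evaluates a candidate feature set and a crude V1-like baseline on the same task with the same linear readout; a benchmark on which the baseline keeps pace with the candidate is not doing useful discriminative work:

```python
# Sketch: does a benchmark separate a candidate model from a crude V1-like baseline?
# The data, the "candidate" features, and the baseline below are illustrative
# stand-ins chosen only to make the comparison runnable.
import numpy as np
from scipy.signal import convolve2d
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def gabor_kernel(theta, size=9, freq=0.25, sigma=2.5):
    """A single Gabor filter, the building block of a crude V1-like model."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def v1_like_features(images, n_orientations=4):
    """Baseline: rectified Gabor responses, coarsely pooled (a crude "null" model)."""
    kernels = [gabor_kernel(t) for t in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    feats = []
    for img in images:
        responses = [np.abs(convolve2d(img, k, mode="same")) for k in kernels]
        # Coarse spatial pooling: mean response in a 4x4 grid per orientation.
        pooled = [r.reshape(4, 8, 4, 8).mean(axis=(1, 3)).ravel() for r in responses]
        feats.append(np.concatenate(pooled))
    return np.array(feats)

def candidate_features(images):
    """Stand-in for a candidate recognition model; here just a rectified random projection."""
    flat = images.reshape(len(images), -1)
    proj = rng.standard_normal((flat.shape[1], 256))
    return np.maximum(flat @ proj, 0)

# Toy benchmark: 32x32 images with binary object labels (stand-in data).
images = rng.standard_normal((200, 32, 32))
labels = rng.integers(0, 2, size=200)

for name, feats in [("V1-like baseline", v1_like_features(images)),
                    ("candidate model", candidate_features(images))]:
    acc = cross_val_score(LinearSVC(), feats, labels, cv=3).mean()
    print(f"{name}: accuracy {acc:.2f}")
```

On a benchmark that genuinely probes object recognition, the candidate should clearly outperform the V1-like baseline under identical readout and cross-validation conditions; Pinto et al. (2008b) report that several “natural image” benchmarks fail this check.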
