Research Article: Fragmented ambiguous objects: Stimuli with stable low-level features for object recognition tasks

Date Published: April 11, 2019

Publisher: Public Library of Science

Author(s): Cheryl A. Olman, Tori Espensen-Sturges, Isaac Muscanto, Julia M. Longenecker, Philip C. Burton, Andrea N. Grant, Scott R. Sponheim, Joseph Najbauer.

http://doi.org/10.1371/journal.pone.0215306

Abstract

Visual object recognition is a complex skill that relies on the interaction of many spatially distinct and specialized visual areas in the human brain. One tool that can help us better understand these specializations and interactions is a set of visual stimuli that do not differ along low-level dimensions (e.g., orientation, contrast) but do differ along high-level dimensions, such as whether a real-world object can be detected. The present work creates a set of line segment-based images that are matched for luminance, contrast, and orientation distribution (both for single elements and for pair-wise combinations) but result in a range of object and non-object percepts. Image generation started with images of isolated objects taken from publicly available databases and then progressed through three stages: a computer algorithm generated 718 candidate images, expert observers selected 217 for further consideration, and naïve observers performed final ratings. This process identified a set of 100 images that all have the same low-level properties but cover a range of recognizability (the proportion of naïve observers (N = 120) who indicated that the stimulus “contained a known object”) and semantic stability (consistency across the categories of living, non-living/manipulable, and non-living/non-manipulable when the same observers named “known” objects). Stimuli are available at https://github.com/caolman/FAOT.git.
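The recognizability and stability measures described above lend themselves to a simple per-image computation. Below is a minimal Python sketch, assuming each image's observer data are available as an array of Yes/No responses and a list of named categories; the function names and the specific stability definition used here (agreement with the modal category) are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def recognizability(yes_no_responses):
    """Proportion of observers who reported that the image contained a known object.

    yes_no_responses: 1-D array of 0/1 responses, one per observer, for a single image.
    """
    return float(np.mean(yes_no_responses))

def semantic_stability(category_labels):
    """Consistency of named-object categories for a single image.

    category_labels: category strings ('living', 'nonliving_manipulable',
    'nonliving_nonmanipulable') from observers who reported a known object.
    Returns the proportion of those observers who agreed with the modal category
    (one plausible definition; the paper's exact measure may differ).
    """
    if len(category_labels) == 0:
        return np.nan
    _, counts = np.unique(category_labels, return_counts=True)
    return counts.max() / counts.sum()

# Hypothetical usage for one image rated by five observers:
print(recognizability([1, 1, 0, 1, 1]))                      # 0.8
print(semantic_stability(['living', 'living', 'living',
                          'nonliving_manipulable']))         # 0.75
```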

Partial Text

A dominant theme in visual neuroscience is that visual perception is accomplished by iterative computations in a hierarchical visual system that are modulated by behavioral goals and context. Thus, visual representations of objects are influenced by non-visual representations, such as task-dependent allocation of attention, which are computed in non-visual regions. Many studies indicate that visual computations are also altered by conditions such as psychosis [1, 2], autism [3], and aging [4], but it is not known exactly how these conditions affect neural mechanisms in the brain. We still need better methods for determining, in each case, whether the visual system is affected at a low level (retina, thalamus and primary visual cortex), an intermediate level (visual areas with retinotopic organization but selectivity for features of intermediate complexity [5]), a high level (object recognition regions), or in its ability to interact with non-visual brain regions. A quantitative approach to this problem requires careful control of visual stimulus features, since visual stimuli provide such a strong feed-forward drive to the brain that small changes to low-level features can have large effects that propagate throughout the system [6, 7].

Fig 4A summarizes the responses of 120 external observers to the 217 images presented in the Yes/No and Naming tasks. In general, recognizability was associated with stability (see below for a statistical assessment of this association in the final set of 100 images). Images perceived as living by some observers and as non-living by others appear at both the high and low ends of recognizability and stability. No image received a recognizability score lower than 0.1 or higher than 0.9: at the high end, presumably because of finger errors or lapses in attention; at the low end, presumably because observers can idiosyncratically recognize objects in even the messiest scenes (like seeing shapes in clouds).
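The association between recognizability and stability could be quantified, for example, with a rank correlation over the per-image scores. The sketch below uses Spearman's rho on hypothetical score arrays; the specific statistical test used in the paper is not stated in this excerpt.

```python
import numpy as np
from scipy import stats

# Hypothetical per-image scores (one value per stimulus); in practice these would be
# the recognizability and stability values computed from the observer data.
recognizability_scores = np.array([0.15, 0.40, 0.62, 0.80, 0.88])
stability_scores = np.array([0.33, 0.50, 0.70, 0.85, 0.95])

# Spearman's rank correlation is one plausible way to assess this association.
rho, p_value = stats.spearmanr(recognizability_scores, stability_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```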

This novel stimulus set is useful for studying object recognition processes while keeping low-level visual information constant (i.e., independent of feature grouping or semantic content). The full package, available at https://github.umn.edu/caolman/FAOT.git, includes 100 fractured ambiguous object stimuli, documents and images indicating statistics of the stimuli and source images, and PsychoPy [46] code for presenting two tasks: the Yes/No task and the Naming task.
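For reference, a minimal PsychoPy sketch of a single Yes/No trial might look like the following. The file name, key mapping, and layout here are illustrative assumptions, not the repository's actual task code, which is distributed with the package linked above.

```python
# Minimal PsychoPy sketch of one Yes/No trial: show a stimulus image and collect
# a yes/no keypress. File name, keys, and sizes are illustrative assumptions.
from psychopy import visual, core, event

win = visual.Window(size=(800, 800), color='grey', units='pix')

stim = visual.ImageStim(win, image='FAOT_001.png', size=(600, 600))  # hypothetical file name
prompt = visual.TextStim(win, text='Did the image contain a known object?  y / n',
                         pos=(0, -350), height=24)

stim.draw()
prompt.draw()
win.flip()

keys = event.waitKeys(keyList=['y', 'n', 'escape'])
print('Response:', keys[0])

win.close()
core.quit()
```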

This work resulted in a set of 100 stimuli that can be used to probe object recognition while controlling the following low-level features: luminance, contrast, orientation distribution, and the number of elements contributing to potential object perception.
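As an illustration of what matching these low-level features entails, the sketch below computes mean luminance, RMS contrast, and a gradient-based orientation histogram for a grayscale stimulus image. This is an independent check written for this summary, not the matching procedure used to generate the stimuli.

```python
# Illustrative check of low-level statistics (mean luminance, RMS contrast,
# orientation distribution) for a grayscale stimulus image.
import numpy as np
from PIL import Image

def low_level_stats(path, n_orientation_bins=8):
    img = np.asarray(Image.open(path).convert('L'), dtype=float) / 255.0

    mean_luminance = img.mean()
    rms_contrast = img.std()

    # Orientation distribution from local gradients, weighted by gradient magnitude.
    gy, gx = np.gradient(img)
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)  # orientations folded into [0, pi)
    hist, _ = np.histogram(orientation, bins=n_orientation_bins,
                           range=(0, np.pi), weights=magnitude)
    hist = hist / hist.sum()

    return mean_luminance, rms_contrast, hist

# Hypothetical usage with a stimulus image from the repository:
# lum, con, ori_hist = low_level_stats('FAOT_001.png')
```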

 
