Searching near and far: The attentional template incorporates viewing distance
Publication year
2024
Number of pages
16 p.
Source
Journal of Experimental Psychology B-Human Perception and Performance, 50, 2, (2024), pp. 216-231
ISSN
Publication type
Article / Letter to editor
Organization
SW OZ DCC SMN
SW OZ DCC CO
Journal title
Journal of Experimental Psychology B-Human Perception and Performance
Volume
vol. 50
Issue
iss. 2
Languages used
English (eng)
Page start
p. 216
Page end
p. 231
Subject
Action, intention, and motor control
Abstract
According to theories of visual search, observers generate a visual representation of the search target (the "attentional template") that guides spatial attention toward target-like visual input. In real-world vision, however, objects produce vastly different visual input depending on their location: your car produces a retinal image that is 10 times smaller when it is parked 50 m away than when it is parked 5 m away. Across four experiments, we investigated whether the attentional template incorporates viewing distance when observers search for familiar object categories. On each trial, participants were precued to search for a car or a person in the near or far plane of an outdoor scene. On "search trials," the scene reappeared and participants indicated whether the search target was present or absent. On intermixed "catch trials," two silhouettes were briefly presented on either side of fixation (matching the shape and/or predicted size of the search target), one of which was followed by a probe stimulus. Participants were more accurate at reporting the location (Experiments 1 and 2) and orientation (Experiment 3) of probe stimuli when these were presented at the location of size-matching silhouettes. Thus, attentional templates incorporate the predicted size of an object based on the current viewing distance. This was only the case, however, when the silhouettes also matched the shape of the search target (Experiment 2). We conclude that attentional templates for finding objects in scenes are shaped by a combination of category-specific attributes (shape) and context-dependent expectations about the likely appearance (size) of these objects at the current viewing location.
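The tenfold size difference cited in the abstract follows from the approximately inverse relation between retinal image size and viewing distance. A minimal worked version, assuming the small-angle approximation (the physical object size s cancels out, so the exact car size does not matter):

\theta \approx \frac{s}{d} \quad\Rightarrow\quad \frac{\theta_{5\,\mathrm{m}}}{\theta_{50\,\mathrm{m}}} \approx \frac{s/5}{s/50} = 10

That is, the same car subtends roughly one tenth of the visual angle at 50 m that it does at 5 m.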
This item appears in the following Collection(s)
- Academic publications [243984]
- Electronic publications [130695]
- Faculty of Social Sciences [30023]
- Open Access publications [104970]