Visual Search

Very little is currently known about how humans perform search tasks in immersive 3D environments containing far more objects (~1000) than traditional 2D visual search experiments (~50). Designing efficient VR user interfaces requires a thorough investigation of perception in 3D. In this study, we recreated the classic feature and conjunction search experiments in VR, modelling 3D virtual space with a spherical coordinate system. The target appeared in one of 32 equally sized regions centered on the participant, formed by partitioning the sphere into 45-degree increments of azimuth and elevation. The target was a red cube embedded among 96, 480, 768, or 1024 distractors that were distributed equally across the 32 regions. Distractors were either green cubes (feature search) or red spheres and green cubes (conjunction search). The task was to find the target as quickly as possible using eye, head, and/or body movements. We analysed the slopes of reaction time with respect to the number of distractors for different regions of the sphere. Based on data from 25 participants, we observed the typical pattern of search slopes: near-flat for feature search and increasing with set size for conjunction search. More interestingly, there was distinct inter-region variation. Reaction times were shorter in the left hemisphere of the search space than in the right, probably reflecting the left-to-right reading bias typical of English readers. Participants preferred counterclockwise head/body rotations when searching for targets not immediately in their field of view. Interestingly, regions below eye level were favoured over those above. Although these findings appear robust, we note that occlusion (and its severity) can act as a nuisance variable in such 3D search tasks.
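To make the setup and analysis concrete, here is a minimal sketch (not the authors' actual code; all names and example reaction times are illustrative assumptions) of the two computations described above: mapping a direction to one of the 32 spherical regions, and estimating a search slope from mean reaction times across the four distractor set sizes.

```python
# Minimal sketch, assuming azimuth in [0, 360) degrees and elevation in
# [-90, 90] degrees; names and example values are illustrative, not the
# authors' actual code.
import numpy as np

AZIMUTH_STEP = 45.0    # 360 / 45 = 8 azimuth sectors
ELEVATION_STEP = 45.0  # 180 / 45 = 4 elevation bands -> 8 * 4 = 32 regions

def region_index(azimuth_deg: float, elevation_deg: float) -> int:
    """Map a direction to one of the 32 equally sized regions."""
    az_bin = int((azimuth_deg % 360.0) // AZIMUTH_STEP)     # 0..7
    el_bin = int((elevation_deg + 90.0) // ELEVATION_STEP)  # 0..4
    el_bin = min(el_bin, 3)  # fold elevation == +90 into the top band
    return el_bin * 8 + az_bin

def search_slope(set_sizes, mean_rts_ms):
    """Least-squares slope of reaction time vs. set size, i.e. the extra
    search time (ms) incurred per additional distractor."""
    slope, _intercept = np.polyfit(set_sizes, mean_rts_ms, deg=1)
    return slope

# Hypothetical mean reaction times for the four set sizes used in the study:
sizes = [96, 480, 768, 1024]
print(search_slope(sizes, [820, 840, 850, 860]))     # feature: near-flat
print(search_slope(sizes, [900, 2100, 3000, 3800]))  # conjunction: steep
```

Repeating the slope fit separately for trials whose target fell in each region index is what yields the inter-region comparison reported above.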

Video

Posters

Poster presented at VSS 2018

Contact

Aman Shankar Mathur

Rupak Majumdar

Tandra Ghose
