Abstract
This research investigated how the similarity and realism of the rendering parameters of background and foreground objects affected egocentric depth perception in indoor virtual and augmented environments. I refer to the similarity of rendering parameters as visual ‘congruence’. Across three experiments, participants performed a perceptual matching task in which they manipulated the depth of a sphere to match the depth of a designated target peg. Each experiment evaluated the influence of different levels of realism and congruence between environment objects on depth perception. In the first experiment, the sphere and peg were both virtual and unrealistic, and the background alternated between virtual and real. In the second experiment, the sphere was virtual (unrealistic) and the peg was real. In the third experiment, the sphere was virtual (realistic), and the peg was either real or virtual (realistic). Realistically rendered objects used physically based rendering (PBR) and global illumination captured from the real environment, in contrast to traditional simplistic rendering. In all experiments, depth perception accuracy depended on the levels of realism and congruence between the sphere, the pegs, and the background. My results demonstrate that applying PBR techniques and baked global illumination to virtual targets and manipulated objects reduced depth estimation errors compared to the non-PBR condition. Moreover, I found that visual congruence between virtual and real elements plays an important role in depth judgments, with real targets promoting accurate depth perception when participants interacted with PBR-rendered manipulated objects. Interaction effects between the target and the manipulated object also affected depth perception, with errors shifting from overestimation to underestimation depending on the rendering condition. My findings contribute to the growing body of knowledge on AR and VR depth perception and support efforts to develop advanced rendering techniques for augmented environments. These results can inform design and development choices for future AR/VR applications, particularly those requiring precise depth judgments and seamless integration of virtual objects into the real world.