Wiesing, Michael ORCID: 0000-0001-8287-127X (2022). Object-based attentional expectancies in virtual reality. PhD thesis, Universität zu Köln.

PDF: Dissertation_MichaelWiesing.pdf - Accepted Version (15MB)
Made available under the CC license: Creative Commons Attribution.

Abstract

Modern virtual reality (VR) technology promises to enable neuroscientists and psychologists to conduct ecologically valid experiments while maintaining precise experimental control. In practice, however, recent studies use game engines such as Unreal Engine or Unity for stimulus creation and data collection, and game engines do not provide the underlying architecture to measure the timing of stimulus events and behavioral input with the accuracy or precision required by many experiments. Furthermore, it is currently not well understood whether VR and its underlying technology engage the same cognitive processes as a comparable real-world situation. Similarly, little is known about whether experimental findings obtained in a standard monitor-based experiment are comparable to those obtained in VR with a head-mounted display (HMD), or whether the different stimulus devices engage different cognitive processes. The aim of my thesis was to investigate whether modern HMDs affect the early processing of basic visual features differently than a standard computer monitor.

In the first project (chapter 1), I developed a new behavioral paradigm to investigate how prediction errors of basic object features are processed. In a series of four experiments, the results consistently indicated that simultaneous prediction errors for unexpected colors and orientations are processed independently at an early level of processing, before object binding comes into play.

My second project (chapter 2) examined the accuracy and precision of stimulus timing and reaction time measurements when using Unreal Engine 4 (UE4) in combination with a modern HMD system. My results demonstrate that stimulus durations can be defined and controlled with high precision and accuracy. However, reaction time measurements turned out to be highly imprecise and inaccurate when using UE4's standard application programming interface (API). I therefore proposed a new software-based approach to circumvent these limitations. Timing benchmarks confirmed that this method can measure reaction times with millisecond-range precision and accuracy.

In the third project (chapter 3), I directly compared task performance in the paradigm developed in chapter 1 between the original experimental setup and a virtual reality simulation of that experiment. To establish two identical experimental setups, I recreated within VR the entire physical environment in which the experiments took place and blended the virtual replica over the physical lab. As a result, the virtual environment (VE) corresponded to the physical laboratory not only visually but also in the sensory properties of other modalities, such as haptic and acoustic feedback. The results showed comparable task performance in the non-VR and VR experiments, suggesting that modern HMDs do not affect the early processing of basic visual features differently than a typical computer monitor.
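The reaction time imprecision reported for UE4's standard API in chapter 2 can be understood through frame-locked input sampling: a game engine typically registers key presses only once per rendered frame, so measured reaction times are quantized to the display refresh interval. The following toy C++ simulation is a hypothetical illustration of the size of this error at a 90 Hz HMD refresh rate; it is not the benchmark code or the software-based method from the thesis.

// Toy simulation (hypothetical, not from the thesis): reaction times measured
// through a frame-locked input API are quantized to the display refresh
// interval, here 90 Hz as on many HMDs.
#include <cstdio>
#include <cmath>
#include <random>

int main() {
    const double frame = 1000.0 / 90.0;  // frame duration in ms (~11.1 ms)
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> rt(250.0, 600.0);  // plausible RT range, ms

    double sumErr = 0.0, maxErr = 0.0;
    const int n = 100000;
    for (int i = 0; i < n; ++i) {
        double trueRt = rt(rng);
        // A frame-locked API only "sees" the key press on the next frame tick,
        // so the measured RT is rounded up to a frame boundary.
        double measured = std::ceil(trueRt / frame) * frame;
        double err = measured - trueRt;
        sumErr += err;
        if (err > maxErr) maxErr = err;
    }
    std::printf("mean overestimation: %.2f ms, max: %.2f ms\n", sumErr / n, maxErr);
    return 0;
}

Under these assumptions, measured reaction times overestimate the true values by about half a frame on average (roughly 5.6 ms at 90 Hz) and by up to a full frame (about 11.1 ms) in the worst case, which illustrates why timestamping input independently of the render loop is necessary to reach millisecond-range precision.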

Item Type: Thesis (PhD thesis)
Creators: Wiesing, Michael (Email: mi.wiesing@gmail.com; ORCID: orcid.org/0000-0001-8287-127X; ORCID Put Code: 121064503)
URN: urn:nbn:de:hbz:38-635496
Date: 28 September 2022
Language: English
Faculty: Central Institutions / Interdisciplinary Research Centers
Divisions: Außeruniversitäre Forschungseinrichtungen > Forschungszentrum Jülich
Subjects: Data processing, computer science; Psychology; Natural sciences and mathematics; Life sciences
Uncontrolled Keywords: Virtual Reality; Reaction Times; Virtual Environments; Unreal Engine (all English)
Date of oral exam: 13 July 2022
Referees:
Weidner, Ralph (PD Dr.)
Vossel, Simone (Prof. Dr.)
Haider, Hilde (Prof. Dr.)
Refereed: Yes
URI: http://kups.ub.uni-koeln.de/id/eprint/63549
