IVE 2024 Short Course - Lecture 2 - Fundamentals of Perception

About This Presentation

Lecture 2 from the IVE 2024 Short Course on the Psychology of XR. This lecture covers some of the Fundamentals of Perception and Psychology that relate to XR.

The lecture was given by Mark Billinghurst on July 15th 2024 at the University of South Australia.


Slide Content

Lecture 2: Fundamentals
The Psychology of XR
July 15th – 19th 2024

Virtual Reality (VR)
●Users immersed in Computer Generated environment
○HMD, gloves, 3D graphics, body tracking

The First VR Experience …
https://www.youtube.com/watch?v=pAC5SeNH8jw

Virtual Reality Definition
●Defining Characteristics
○Sense of Immersion
■User feels immersed in computer generated space
○Interactive in real-time
■The virtual content can be interacted with
○Independence
■User can have independent view and reaction to
environment

David Zeltzer’s AIP Cube
Autonomy – User can react to events
and stimuli.
Interaction – User can interact with
objects and environment.
Presence – User feels immersed through
sensory input and output channels
Zeltzer, D. (1992). Autonomy, interaction, and presence. Presence: Teleoperators & Virtual Environments, 1(1), 127-132.

Augmented Reality (AR)
•Virtual Images blended with the real world
•See-through HMD, handheld display, viewpoint tracking, etc..

Augmented Reality Definition
●Defining Characteristics [Azuma 97]
○Combines Real and Virtual Images
■Both can be seen at the same time
○Interactive in real-time
■The virtual content can be interacted with
○Registered in 3D
■Virtual objects appear fixed in space
Azuma, R. T. (1997). A survey of augmented reality. Presence, 6(4), 355-385.

Milgram’s Mixed Reality (MR) Continuum
Augmented Reality Virtual Reality
Real World Virtual World
Mixed Reality
"...anywhere between the extrema of the virtuality continuum."
P. Milgram and A. F. Kishino, (1994) A Taxonomy of Mixed Reality Visual Displays
Internet of Things

Apple Vision Pro (2024)
●Transitioning from AR to VR
●Spatial Computing – interface seamlessly blending with real world

https://www.youtube.com/watch?v=oP6CrLcMKO4

Extended Reality (XR)
Augmented Reality Virtual Reality
Real World Virtual World
Mixed Reality
Extended Reality
Internet of Things

Goal of Virtual Reality
“.. to make it feel like you’re actually
in a place that you are not.”
Palmer Luckey
Co-founder, Oculus

Creating a Good VR Experience
●Creating a good experience requires multisensory input
○Integrating multiple perceptual cues

Example: Shard VR Slide
●Ride down the Shard at 100 mph - Multi-sensory VR
https://www.youtube.com/watch?v=HNXYoEdBtoU

Creating Illusions
●Virtual Reality
○You’re immersed in a place
●Augmented Reality
○Virtual content is in your place
●Mixed Reality
○Seamlessly moving from the real world into VR

Perception

What is Reality?

How do We Perceive Reality?
●We understand the world through our senses:
○Sight, Hearing, Touch, Taste, Smell (and others..)
●Two basic processes:
○Sensation – Gathering information
○Perception – Interpreting information

Motivation
●Understand: In order to create a strong sense of Presence
we need to understand the Human Perception system
●Stimulate: We need to be able to use technology to provide
real world sensory inputs, and create the VR illusion
VR Hardware Human Senses

Senses
●How an organism obtains information for perception:
○Sensation part of Somatic Division of Peripheral Nervous System
○Integration and perception requires the Central Nervous System
●Five major senses (but there are more..):
○Sight (Ophthalmoception)
○Hearing (Audioception)
○Taste (Gustaoception)
○Smell (Olfacoception)
○Touch (Tactioception)

Relative Importance of Each Sense
●Percentage of neurons in
brain devoted to each
sense
○Sight – 30%
○Touch – 8%
○Hearing – 2%
○Smell - < 1%
●Over 60% of brain
involved with vision in
some way

Other Lesser Known Senses..
●Proprioception = sense of body position
○what is your body doing right now
●Equilibrium = balance
●Acceleration
●Nociception = sense of pain
●Temperature
●Satiety = state of being fed or gratified to or beyond capacity
●Thirst
●Micturition = sense of bladder fullness (need to urinate)

Sight

The Human Visual System
●Purpose is to convert visual input to signals in the brain

The Human Eye
●Light passes through cornea and lens onto retina
●Photoreceptors in retina convert light into electrochemical signals

Photoreceptors – Rods and Cones
●Retina photoreceptors come in two types, Rods and Cones
○Rods – 125 million, periphery of retina, no colour detection, night vision
○Cones – 4-6 million, center of retina, colour vision, day vision

Human Horizontal and Vertical FOV
●Humans can see ~135° vertical FOV (60° above, 75° below)
●See up to ~210° horizontal FOV, ~115° stereo overlap
●Colour/stereo in centre, black and white/mono in periphery

Vergence + Accommodation

https://www.youtube.com/watch?v=p_xLO7yxgOk

Visual Acuity
Visual Acuity Test Targets
●Ability to resolve details
●Several types of visual acuity
○detection, separation, etc
●Normal eyesight can see a 50 cent coin at 80m
○Corresponds to 1 arc min (1/60th of a degree)
○Max acuity = 0.4 arc min
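As a quick sanity check of the coin example, a rough back-of-envelope calculation (assuming an Australian 50 cent coin of about 31.65 mm diameter, a figure not stated on the slide):

```python
import math

# Assumed coin size: an Australian 50 cent coin is ~31.65 mm across.
coin_diameter_m = 0.03165
viewing_distance_m = 80.0

# Angle the coin subtends at the eye, converted to arc minutes.
angle_rad = 2 * math.atan(coin_diameter_m / (2 * viewing_distance_m))
angle_arcmin = math.degrees(angle_rad) * 60
print(f"{angle_arcmin:.1f} arc min")  # ~1.4 arc min, on the order of the 1 arc-min limit
```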

Stereo Perception/Stereopsis
●Eyes separated by IPD
○Inter pupillary distance
○5 – 7.5cm (avge. 6.5cm)
●Each eye sees diff. image
○Separated by image parallax
●Images fused to create 3D stereo view
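A minimal sketch (assuming the average 6.5 cm IPD quoted above) of how the vergence angle behind stereopsis falls off with distance:

```python
import math

IPD_M = 0.065  # average inter-pupillary distance from the slide (~6.5 cm)

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight when fixating a point
    straight ahead at the given distance."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

# The angle (and hence the disparity signal) shrinks rapidly with distance,
# which is one reason stereo cues matter most at close range.
for d in (0.5, 1, 2, 5, 10):
    print(f"{d:4.1f} m -> {vergence_angle_deg(d):.2f} deg")
```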

Depth Perception
●The visual system uses a range of different Stereoscopic and Monocular cues for depth perception
○Stereoscopic: eye convergence angle, disparity between left and right images, diplopia
○Monocular: eye accommodation, perspective, atmospheric artifacts (fog), relative sizes, image blur, occlusion, motion parallax, shadows, texture
●Parallax can be more important for depth perception!
●Stereoscopy is important for size and distance evaluation

Common Depth Cues

Depth Perception Distances
●E.g. convergence/accommodation are used for depth perception at < 10m

Properties of the Human Visual System
●Visual acuity: 20/20 is ~1 arc min
●Field of view: ~200° monocular, ~120° binocular, ~135° vertical
●Resolution of eye: ~576 megapixels
●Temporal resolution: ~60 Hz (depends on contrast, luminance)
●Dynamic range: instantaneous 6.5 f-stops, adapt to 46.5 f-stops
●Colour: everything in CIE xy diagram
●Depth cues in 3D displays: vergence, focus, (dis)comfort
●Accommodation range: ~8cm to ∞, degrades with age
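A rough estimate of the display resolution these figures imply if 1 arc-min acuity had to be matched uniformly across the monocular FOV (an oversimplification, since acuity falls off sharply outside the fovea):

```python
# Pixels per degree needed for 1 arc-min resolution, applied (unrealistically)
# across the full monocular FOV figures quoted above. Real acuity drops
# outside the fovea, so treat this as an upper bound.
ACUITY_PPD = 60        # 60 pixels per degree = 1 pixel per arc minute
H_FOV_DEG = 200
V_FOV_DEG = 135

pixels_per_eye = (H_FOV_DEG * ACUITY_PPD) * (V_FOV_DEG * ACUITY_PPD)
print(f"~{pixels_per_eye / 1e6:.0f} megapixels per eye")  # ~97 MP
```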

Creating the Perfect Illusion
Cuervo, E., Chintalapudi, K., & Kotaru, M. (2018, February). Creating the perfect illusion: What will it take to create life-like virtual reality headsets? In Proceedings of the 19th International Workshop on Mobile Computing Systems & Applications (pp. 7-12).
●Technology to create life-like VR HMDs
●Compared to current HMDs
○2 − 10× higher pixel density
○20 − 30× higher frame rate

Comparison between Eyes and HMD

When Will We Achieve Life-like VR Displays?
●Could achieve visual fidelity by 2025
○BUT:
■GPUs not fast enough for high framerate (140 Tflops by 2025, need 10x more for 1800 Hz)
■Wireless life-like VR requires 2.7 Tbps, cf. wireless HD standard of 25 Gbps
[Figure: display frame rate vs. pixels/degree]

Hearing

Anatomy of the Ear

Auditory Thresholds
●Humans hear frequencies from 20 – 22,000 Hz
●Most everyday sounds from 80 – 90 dB

Sound Localization
●Humans have two ears
○localize sound in space
●Sound can be localized
using 3 coordinates
○Azimuth, elevation, distance

Sound Localization
https://www.youtube.com/watch?v=FIU1bNSlbxk

Sound Localization (Azimuth Cues)
Interaural Time Difference
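As an illustration (not from the slides), the interaural time difference is often approximated with Woodworth's spherical-head formula; the head radius and speed of sound below are assumed typical values:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth spherical-head approximation of the interaural time
    difference for a distant source (0° = straight ahead, 90° = to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:5.0f} microseconds")  # up to ~650 µs at 90°
```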

HRTF (Elevation Cue)
●Pinna and head shape affect frequency intensities
●Sound intensities measured with microphones in ear
and compared to intensities at sound source
○Difference is HRTF, gives clue as to sound source location

Accuracy of Sound Localization
●People can locate sound
○Most accurately in front of them
■2-3° error in front of head
○Least accurately to sides and behind head
■Up to 20° error to side of head
■Largest errors occur above/below elevations and behind head
●Front/back confusion is an issue
○Up to 10% of sounds presented in the front are perceived
coming from behind and vice versa (more in headphones)
Butean, A., Bălan, O., Negoi, I., Moldoveanu, F., & Moldoveanu, A. (2015). Comparative research on sound localization accuracy in the free-field and virtual auditory displays. In Conference Proceedings of eLearning and Software for Education (eLSE) (No. 01, pp. 540-548). Universitatea Nationala de Aparare Carol I.

Touch

Haptic Sensation
●Somatosensory System
○complex system of nerve cells that responds to
changes to the surface or internal state of the body
●Skin is the largest organ
○1.3-1.7 square m in adults
●Tactile: Surface properties
○Receptors not evenly spread
○Most densely populated area is the tongue
●Kinesthetic: Muscles, Tendons, etc.
○Also known as proprioception

Cutaneous System
●Skin – heaviest organ in the body
○Epidermis outer layer, dead skin cells
○Dermis inner layer, with four kinds of mechanoreceptors

Mechanoreceptors
●Cells that respond to pressure, stretching, and vibration
○Slow Acting (SA), Rapidly Acting (RA)
○Type I at surface – light, discriminative touch
○Type II deep in dermis – heavy and continuous touch
Receptor Type | Rate of Acting | Stimulus Frequency | Receptive Field | Detection Function
Merkel Discs | SA-I | 0 – 10 Hz | Small, well defined | Edges, intensity
Ruffini corpuscles | SA-II | 0 – 10 Hz | Large, indistinct | Static force, skin stretch
Meissner corpuscles | RA-I | 20 – 50 Hz | Small, well defined | Velocity, edges
Pacinian corpuscles | RA-II | 100 – 300 Hz | Large, indistinct | Acceleration, vibration

Spatial Resolution
●Sensitivity varies greatly
○Two-point discrimination
Body Site | Threshold Distance
Finger | 2-3 mm
Cheek | 6 mm
Nose | 7 mm
Palm | 10 mm
Forehead | 15 mm
Foot | 20 mm
Belly | 30 mm
Forearm | 35 mm
Upper Arm | 39 mm
Back | 39 mm
Shoulder | 41 mm
Thigh | 42 mm
Calf | 45 mm
http://faculty.washington.edu/chudler/chsense.html
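A minimal sketch of how these thresholds might be used in practice, e.g. when spacing tactile actuators; the dictionary and helper are hypothetical, with values taken from the table above:

```python
# Two-point discrimination thresholds (mm) from the table above.
TWO_POINT_THRESHOLD_MM = {
    "finger": 2.5, "cheek": 6, "nose": 7, "palm": 10, "forehead": 15,
    "foot": 20, "belly": 30, "forearm": 35, "upper_arm": 39, "back": 39,
    "shoulder": 41, "thigh": 42, "calf": 45,
}

def min_actuator_spacing_mm(body_site: str) -> float:
    """Spacing actuators closer than the two-point threshold is wasted,
    since separate contacts would be felt as a single one."""
    return TWO_POINT_THRESHOLD_MM[body_site]

print(min_actuator_spacing_mm("palm"))   # 10 mm
```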

Proprioception/Kinaesthesia
●Proprioception (joint position sense)
○Awareness of movement and positions of body parts
■Due to nerve endings and Pacinian and Ruffini corpuscles at joints
○Enables us to touch nose with eyes closed
○Joints closer to body more accurately sensed
○Users know hand position to within ~8cm without looking
●Kinaesthesia (joint movement sense)
○Sensing muscle contraction or stretching
■Cutaneous mechanoreceptors measuring skin stretching
○Helps with force sensation

Information Processing

Simple Perception Action Model
Wickens, C. D., & Carswell, C. M. (2021). Information processing. Handbook of Human Factors and Ergonomics, 114-158.
Open Loop
Closed Loop

Simple Sensing/Perception Model

Human Information Processing Model
Wickens, C. D. (1992). Engineering Psychology and Human Performance, 2nd ed., HarperCollins, New York.

Creating the Illusion of Reality
●Fooling human perception by using
technology to generate artificial sensations
○Computer generated sights, sounds, smell, etc

Reality vs. Virtual Reality
●In a VR system there are input and output devices
between human perception and action

Using Technology to Stimulate Senses
●Simulate output
○E.g. simulate real scene
●Map output to devices
○Graphics to HMD
●Use devices to
stimulate the senses
○HMD stimulates eyes
Example: Visual Simulation
Visual Simulation → 3D Graphics → HMD → Vision System → Brain
(Human-Machine Interface)

Example Birdly - http://www.somniacs.co/
●Create illusion of flying like a bird
●Multisensory VR experience
○Visual, audio, wind, haptic

https://www.youtube.com/watch?v=gHE6H62GHoM

HMD Basic Principles
●Use display with optics to create illusion of virtual screen

Simple Magnifier HMD Design
[Diagram: display (image source) at distance p from an eyepiece (one or more lenses) of focal length f; the eye sees a virtual image at distance q]
1/p + 1/q = 1/f, where
p = object distance (distance from image source to eyepiece)
q = image distance (distance of image from the lens)
f = focal length of the lens
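A small worked example of the magnifier equation; the focal length and display distance are illustrative values only, not those of any specific HMD:

```python
def image_distance(p_m: float, f_m: float) -> float:
    """Solve the magnifier equation 1/p + 1/q = 1/f for q. With the display
    just inside the focal length (p < f), q comes out negative, i.e. a
    virtual image on the same side of the lens as the display."""
    return 1.0 / (1.0 / f_m - 1.0 / p_m)

# Illustrative numbers only: a 40 mm focal-length eyepiece with the
# display 38 mm away.
p, f = 0.038, 0.040
q = image_distance(p, f)
print(f"virtual image at {abs(q):.2f} m, linear magnification ~{abs(q) / p:.0f}x")
```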

Vergence-Accommodation Conflict
●Looking at real objects, vergence and focal distance match
●In VR, vergence and accommodation can mismatch
○Eyes accommodate (focus) on the HMD screen, but converge on the virtual object behind the screen

AR Vergence and Accommodation
●Fixed focal distance for OST displays
●Accommodation conflict between real and virtual object

AR – Focal Rivalry
●Optical see-through AR displays with fixed focal length
○E.g. Hololens focal length ~2m
●When real objects are closer than the focal length, real and virtual objects can't both be kept in focus
○Either real or virtual become blurry

Example
●People made errors over twice as large on a connect-the-dots task in AR vs. the real world
○Connecting virtual numbers: 0.9 mm average error without AR, 2.3 mm error using AR
Focus on Ruler Focus on Virtual Image
Condino, S., Carbone, M., Piazza, R., Ferrari, M., & Ferrari, V. (2019). Perceptual limits of optical see-through visors for augmented reality guidance of manual tasks. IEEE Transactions on Biomedical Engineering, 67(2), 411-419.

Using Multiple Image Planes

MagicLeap Display
●Optical see through AR display
○Overlay graphics directly on real world
○40° x 30° FOV, 1280 x 960 pixels/eye
●Waveguide based display
○Holographic optical element
○Very thin physical display
●Two sets of waveguides
○Different focal planes
■Overcomes vergence/accommodation problem
○Eye tracking for selecting focal plane
●Separate CPU/GPU unit

Distortion in Lens Optics
[Figure: a rectangle and the distorted shape it maps to through the lens]

HTC Vive Optics

To Correct for Distortion
●Must pre-distort the image
●This is a pixel-based distortion
●Use shader programming (see the sketch below)
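A minimal sketch of the idea, written as a plain Python function rather than an actual fragment shader; the simple radial distortion model and the k1/k2 coefficients are illustrative assumptions, not values from any real headset SDK:

```python
def predistort_uv(u: float, v: float, k1: float = 0.22, k2: float = 0.24):
    """For a display pixel at texture coordinate (u, v), return where to
    sample the rendered image. Scaling the sample radius outward squeezes
    the rendered image into a barrel shape on screen, which the lens's
    pincushion distortion then cancels. In a real system this runs per
    pixel in a fragment shader; k1/k2 are placeholder coefficients."""
    dx, dy = u - 0.5, v - 0.5             # offset from the lens centre
    r2 = dx * dx + dy * dy                # squared radial distance
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # radial distortion factor
    return 0.5 + dx * scale, 0.5 + dy * scale

print(predistort_uv(0.9, 0.5))  # pixels near the edge sample further out
```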

VR Distorted Image

Interpupillary Distance (IPD)
●Horizontal distance between a user's eyes
●Distance between the two optical axes in a HMD
●Typical IPD ~ 63mm

Field of View
Monocular FOV is the angular subtense of the displayed image as measured from the pupil of one eye.
Total FOV is the total angular size of the displayed image visible to both eyes.
Binocular (or stereoscopic) FOV refers to the part of the displayed image visible to both eyes.
FOV may be measured horizontally, vertically, or diagonally.

Typical VR HMD FOV

Foveated Displays
●Combine high resolution center
with low resolution periphery

Varjo Display
●Focus area (27° x 27°): 70 PPD, 1920 x 1920 px (Varjo resolution, 1 uOLED panel in centre)
●Periphery: 115° FOV, 30 PPD, 2880 x 2720 px (non-Varjo resolution, 1 LCD for wide FOV)

Varjo XR-3 Demo – Threading a Needle
https://www.youtube.com/watch?v=5iEwlOEUQjI

Perception Based Graphics
●Eye Physiology
○Cones in eye centre = colour vision; rods in periphery = motion, B+W
●Foveated Rendering (see the sketch below)
○Use eye tracking to draw highest resolution where the user is looking
○Reduces graphics throughput
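A toy sketch of the foveated-rendering idea, choosing a resolution scale from gaze eccentricity; the bands and scale factors are arbitrary illustration values, not from any particular SDK:

```python
def resolution_scale(eccentricity_deg: float) -> float:
    """Toy foveated-rendering policy: full resolution near the tracked gaze
    point, progressively coarser shading in the periphery."""
    if eccentricity_deg < 5:      # foveal region
        return 1.0
    if eccentricity_deg < 20:     # parafoveal
        return 0.5
    return 0.25                   # far periphery

for ecc in (2, 10, 40):
    print(f"{ecc:2d} deg from gaze -> render at {resolution_scale(ecc):.2f}x resolution")
```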

Foveated Rendering
●https://www.youtube.com/watch?v=lNX0wCdD2LA

Typical VR Simulation Loop
●User moves head, scene updates, displayed graphics change

System Delays
●Need to synchronize system to reduce delays

Typical System Delays
●Pipeline: Tracking (20 Hz = 50 ms) → Calculate Viewpoint (500 Hz = 2 ms) → Simulation/Render Scene (30 Hz = 33 ms) → Draw to Display (60 Hz = 17 ms)
●Total Delay = 50 + 2 + 33 + 17 = 102 ms
○1 ms delay = 1/3 mm error for an object drawn at arm's length
○So a total of ~33 mm error from when the user begins moving to when the object is drawn
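The same budget expressed as a small calculation; the stage rates and the 1/3 mm-per-ms rule of thumb come from the slide above:

```python
# The slide's latency budget, stage by stage (update rate -> added delay).
stage_rates_hz = {
    "tracking": 20,
    "calculate viewpoint": 500,
    "simulation/render": 30,
    "draw to display": 60,
}
delays_ms = {name: 1000.0 / hz for name, hz in stage_rates_hz.items()}
total_ms = sum(delays_ms.values())

for name, ms in delays_ms.items():
    print(f"{name:20s} {ms:5.1f} ms")
print(f"{'total':20s} {total_ms:5.1f} ms")            # ~102 ms

# Rule of thumb: ~1/3 mm of registration error per ms of delay for an
# object drawn at arm's length.
print(f"registration error ~{total_ms / 3:.0f} mm")   # ~34 mm
```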

Effects of System Latency
●Degraded Visual Acuity
○Scene still moving when head stops = motion blur
●Degraded Performance
○As latency increases it’s difficult to select objects etc.
○If latency > 120 ms, training doesn’t improve performance
●Breaks-in-Presence
○If system delay high user doesn’t believe they are in VR
●Negative Training Effects
○Users train to operate in a world with delay
●Simulator Sickness
○Latency is greatest cause of simulator sickness

Simulator Sickness
●Visual input conflicting with vestibular system

What Happens When Senses Don’t Match?
●20-30% VR users experience motion sickness
●Sensory Conflict Theory
○Visual cues don’t match vestibular cues
■Eyes – “I’m moving!”, Vestibular – “No, you’re not!”

Avoiding Motion Sickness
●Better VR experience design
○More natural movements
●Improved VR system performance
○Less tracking latency, better graphics frame rate
●Provide a fixed frame of reference
○Ground plane, vehicle window
●Add a virtual nose
○Provide peripheral cue
●Eat ginger
○Reduces upset stomach

Many Causes of Simulator Sickness
●25-40% of VR users get Simulator Sickness, due to:
●Latency
○Major cause of simulator sickness
●Tracking accuracy/precision
○Seeing world from incorrect position, viewpoint drift
●Field of View
○Wide field of view creates more peripheral vection = sickness
●Refresh Rate/Flicker
○Flicker/low refresh rate creates eye fatigue
●Vergence/Accommodation Conflict
○Creates eye strain over time
●Eye separation
○If IPD not matching to inter-image distance then discomfort

Motion Sickness
●https://www.youtube.com/watch?v=BznbIlW8iqE

Applying HIP to XR Design

Human Information Processing Model
●High level staged model from Wickens and Carswell (1997)
○Relates perception, cognition, and physical ergonomics
Perception Cognition Ergonomics

Design for Perception
●Need to understand perception to design AR/VR
●Visual perception
○Many types of visual cues (stereo, oculomotor, etc.)
●Auditory system
○Binaural cues, vestibular cues
●Somatosensory
○Haptic, tactile, kinesthetic, proprioceptive cues
●Chemical Sensing System
○Taste and smell

Depth Perception Problems
●Without proper depth cues AR interfaces look unreal

Which of these POIs are near or far?

Types of Depth Cues

Improving Depth Perception
Cutaways
Occlusion
Shadows

Cutaway Example
●Providing depth perception cues for AR
https://www.youtube.com/watch?v=2mXRO48w_E4

Design for Cognition
●Design for Working and Long-term memory
○Working memory
■Short term storage, Limited storage (~5-9 items)
○Long term memory
■Memory recall triggered by associative cues
●Situational Awareness
○Model of current state of user’s environment
■Used for wayfinding, object interaction, spatial awareness, etc..
○P
■Landmarks, procedural cues, map knowledge
■Support both ego-centric and exo-centric views

Micro-Interactions
▪When using mobile phones, people split their attention between the display and the real world

Time Looking at Screen
Oulasvirta, A. (2005). The fragmentation of attention in mobile interaction, and what to do with it. Interactions, 12(6), 16-18.

Dividing Attention to World
●Number of times looking away from mobile screen

Design for Micro Interactions
▪Design interaction for less than a few seconds
○Tiny bursts of interaction
○One task per interaction
○One input per interaction
▪Benefits
○Use limited input
○Minimize interruptions
○Reduce attention fragmentation

NHTSA Guidelines - www.nhtsa.gov
For technology in cars:
•Any task by a driver should be interruptible at any time.
•The driver should control the pace of task interactions.
•Tasks should be completed with glances away from the road of less than 2 seconds.
•Cumulative time glancing away from the road should be at most 12 seconds.

Make it Glanceable
●Seek to rigorously reduce information density. Successful designs support recognition, not reading.
[Figure: bad vs. good examples of glanceable design]

Reduce Information Chunks
●You are designing for recognition, not reading. Reducing the total number of information chunks will greatly increase the glanceability of your design.
●Example: a layout with 3 information chunks vs. one with 5-6 chunks
○3 chunks – eye movements: For 1: 1-2 (460 ms), For 2: 1 (230 ms), For 3: 1 (230 ms); total ~920 ms
○5-6 chunks – eye movements: For 1: 1 (230 ms), For 2: 1 (230 ms), For 3: 1 (230 ms), For 4: 3 (690 ms), For 5: 2 (460 ms); total ~1,840 ms

Ego-centric and Exo-centric views
●Combining ego-centric and exo-centric cues for better situational awareness

Cognitive Issues in Mobile AR
●Information Presentation
○Amount, Representation, Placement, View combination
●Physical Interaction
○Navigation, Direct manipulation, Content creation
●Shared Experience
○Social context, Bodily Configuration, Artifact manipulation, Display space
Li, N., & Duh, H. B. L. (2013). Cognitive issues in mobile augmented reality: an embodied perspective.
In Human Factors in Augmented Reality Environments (pp. 109-135). Springer, New York, NY.

Information Presentation
•Consider
•The amount of information
•Clutter, complexity
•The representation of information
•Navigation cues, POI representation
•The placement of information
•Head, body, world stabilized (see the sketch after this list)
•Using view combinations
•Multiple views
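A toy sketch (not from the lecture) of what head- vs. world-stabilized placement means in code; positions only, and all names and values are illustrative:

```python
import numpy as np

def label_position(mode: str, head_pos: np.ndarray, head_forward: np.ndarray,
                   world_anchor: np.ndarray, distance: float = 1.5) -> np.ndarray:
    """Toy illustration of placement strategies (positions only, no rotation).
    'head' keeps the label a fixed distance in front of the viewer; 'world'
    pins it to a fixed point in the environment. A body-stabilized label
    would use the torso pose instead of the head pose."""
    if mode == "head":
        return head_pos + distance * head_forward   # follows head motion
    if mode == "world":
        return world_anchor                         # stays put as the user moves
    raise ValueError(f"unknown placement mode: {mode}")

head = np.array([0.0, 1.6, 0.0])        # eye height
forward = np.array([0.0, 0.0, -1.0])    # looking down -Z
poi = np.array([2.0, 1.0, -3.0])        # a world-anchored point of interest
print(label_position("head", head, forward, poi))   # moves with the head
print(label_position("world", head, forward, poi))  # fixed in the world
```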

Example: Twitter 360
●iPhone application
●See geo-located tweets in real world
●Twitter.com supports geo tagging

But: Information Clutter from Many Tweets

Solution: Information Filtering

Information Filtering
Before After

Outdoor AR: Limited FOV

Zooming Views
●Show POI outside FOV
●Zooms between map and panorama views

https://www.youtube.com/watch?v=JLxLH9Cya20

Design for Physical Ergonomics
●Design for the human motion range
○Consider human comfort and natural posture
●Design for hand input
○Coarse and fine scale motions, gripping and grasping
○Avoid “Gorilla arm syndrome” from holding arm pose

Gorilla Arm in AR
●Design interface to reduce mid-air gestures

XRgonomics
●Uses physiological model to calculate ergonomic interaction cost
○Difficulty of reaching points around the user
○Customizable for different users
○Programmable API, Hololens demonstrator
●GitHub Repository
○https://github.com/joaobelo92/xrgonomics
Evangelista Belo, J. M., Feit, A. M., Feuchtner, T., & Grønbæk, K. (2021, May). XRgonomics: Facilitating the Creation of
Ergonomic 3D Interfaces. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-11).

XRgonomics
https://www.youtube.com/watch?v=cQW9jfVXf4g

System Design Guidelines - I
●Hardware
○Choose HMDs with fast pixel response time, no flicker
○Choose trackers with high update rates, accurate, no drift
○Choose HMDs that are lightweight, comfortable to wear
○Use hand controllers with no line-of-sight requirements
●System Calibration
○Have virtual FOV match actual FOV of HMD
○Measure and set the user's IPD
●Latency Reduction
○Minimize overall end to end system delay
○Use displays with fast response time and low persistence
○Use latency compensation to reduce perceived latency
Jason Jerald, The VR Book, 2016

System Design Guidelines - II
●General Design
○Design for short user experiences
○Minimize visual stimuli closer to eye (vergence/accommodation)
○For binocular displays, do not use 2D overlays/HUDs
○Design for sitting, or provide physical barriers
○Show virtual warning when user reaches end of tracking area
●Motion Design
○Move virtual viewpoint with actual motion of the user
○If latency high, no tasks requiring fast head motion
●Interface Design
○Design input/interaction for user’s hands at their sides
○Design interactions to be non-repetitive to reduce strain injuries
Jason Jerald, The VR Book, 2016

Questions?
[email protected]