Research Article: Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze

Date Published: June 14, 2018

Publisher: Public Library of Science

Author(s): Raul Rodriguez, Benjamin Thomas Crane, Stefan Glasauer.

http://doi.org/10.1371/journal.pone.0199097

Abstract

Heading direction is determined from visual and inertial cues. Visual headings are encoded in retinal coordinates, while inertial headings are encoded in body coordinates. Thus, during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of the stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments varied only the reliability of the visual stimulus, and it is unclear what occurs when the inertial reliability is varied. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite the gaze direction. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, although the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weights calculated from the thresholds. In two subjects the empirical weights were near optimal, while in the remaining three subjects the inertial stimulus was weighted more heavily than the optimal prediction. On average, the inertial stimulus was weighted more heavily than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when the reliability of the inertial stimulus is varied.

Partial Text

Determining how we are moving relative to the outside world requires processing multiple sensory stimuli. For heading perception, two salient sensory systems are visual and vestibular. Vision tells us how the world is moving relative to us using cues such as optic flow[1, 2], while the vestibular system provides a more direct measure of self-motion. In addition to the vestibular system, proprioception likely plays a minor role in sensing inertial motion[3]; in this paper we use the term inertial to include combined vestibular and proprioceptive cues. Heading can be determined from inertial motion alone[4–7]. However, when visual and inertial cues are both available, they are integrated to form a unified perception of heading direction[8–14].
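
The reliability-weighted (maximum-likelihood) model that frames this literature combines the two cues in proportion to their inverse variances. A minimal sketch follows, assuming each cue's single-cue discrimination threshold can stand in for its noise standard deviation; the visual threshold value below is an illustrative placeholder, not a fitted value from this study.

```python
def optimal_weights(sigma_visual, sigma_inertial):
    """Inverse-variance (maximum-likelihood) cue weights.

    The single-cue discrimination thresholds serve as estimates of
    each cue's noise standard deviation.
    """
    reliability_v = 1.0 / sigma_visual ** 2
    reliability_i = 1.0 / sigma_inertial ** 2
    w_visual = reliability_v / (reliability_v + reliability_i)
    return w_visual, 1.0 - w_visual

# sigma_visual is an assumed placeholder; sigma_inertial echoes the
# no-vibration inertial threshold reported in the abstract (degrees).
w_v, w_i = optimal_weights(sigma_visual=6.0, sigma_inertial=4.8)

# Predicted combined percept when the cues disagree: a weighted
# average of the signed single-cue biases (values loosely based on
# the abstract, for illustration only).
heading_visual, heading_inertial = -9.6, 3.6  # degrees
predicted_heading = w_v * heading_visual + w_i * heading_inertial
```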

Eye position had only a small effect on the bias during the purely inertial stimulus. With no vibration and left gaze, the average point of subjective equality (PSE) was 3.6 ± 5.2° (mean ± SD) to the right, and with right gaze the mean bias was 0.0 ± 3.9° (Fig 4A). This difference with gaze direction was not significant for the no-vibration inertial motion (p > 0.1, paired t-test). However, small gaze-dependent differences in bias were also present when vibration was added to the inertial stimulus. When all inertial conditions were considered across vibration amplitudes (Fig 4B–4C), the PSE was shifted 3.3 ± 7.7° (mean ± SD) to the right with left gaze and 2.9 ± 6.3° to the left with right gaze. These were significantly different (p = 0.006, paired t-test). Thus, a midline inertial stimulus would be more likely to be perceived in the direction of gaze. With a visual stimulus, the gaze-dependent bias was larger than with the inertial stimulus and in the opposite direction (Fig 4E): the mean PSE was 11.8 ± 5.4° to the right with right gaze and 7.3 ± 6.0° to the left with left gaze, a significant difference in PSE with gaze direction (p = 0.02, paired t-test). Because of this shift, a midline visual heading would be likely to be perceived opposite the gaze direction. Thus, gaze shifts had opposite effects on the perceived direction of visual and inertial headings, such that at lateral gaze positions the difference between the perceived directions of visual and inertial headings (when delivered separately) averaged 11.4°. This offset allowed the relative weights of the visual and inertial headings to be determined during multisensory stimulus presentation (empirical weight, Eq 3).
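
Eq 3 itself is not reproduced in this excerpt, but the construction it presumably formalizes recovers the empirical visual weight from the three measured biases, since the combined percept is modeled as a weighted average of the single-cue percepts. A sketch under that assumption, with placeholder PSE values:

```python
def empirical_visual_weight(pse_combined, pse_visual, pse_inertial):
    """Solve PSE_combined = w_v*PSE_visual + (1 - w_v)*PSE_inertial for w_v."""
    return (pse_combined - pse_inertial) / (pse_visual - pse_inertial)

# Signed biases in degrees (rightward positive) for left gaze: the
# visual and inertial single-cue PSEs pull in opposite directions, as
# reported above. pse_combined is an assumed placeholder measurement.
w_v = empirical_visual_weight(pse_combined=-2.0,
                              pse_visual=-7.3,
                              pse_inertial=3.3)
w_i = 1.0 - w_v  # empirical inertial weight
```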

Several papers have now examined visual-inertial cue integration for heading estimation[9, 15, 16, 24]. On average, all of these studies found that the inertial (i.e., vestibular) cue was weighted more heavily than would be predicted from its relative reliability. When weights for individual subjects were reported, there was significant variation in the relative weights of the visual and inertial stimuli in both monkeys[15] and humans[16]. At the individual level, the higher inertial weighting appeared to be driven by one of three monkeys[15] and two of seven humans[16]. The reason the inertial stimulus is more heavily weighted than expected has not been fully explained. In the monkey experiments it was thought this might be due to the animals being trained on the inertial cue first, but this would not explain why the trend was also present in humans[15, 16, 23, 31], who were not trained. The current experiments investigated the possibility that the higher inertial weighting was due to the inertial cue reliability being held constant, unlike the visual cue reliability, which was varied between trials. The current results demonstrated that the inertial stimulus continued to be weighted more heavily even when its reliability was varied. Surprisingly, with the current protocol the inertial stimulus was consistently weighted more heavily in subjects 3, 4, and 5 (Fig 7) than in the previous experiments, in which the visual stimulus reliability was varied. In some individuals (e.g., subject 4 in Fig 7) the empirical visual weight was negative (i.e., the inertial weight was greater than unity). This is not possible under Bayesian predictions, and in these subjects the inertial stimulus was weighted more heavily than when the reliability of the visual stimulus was varied. In our analysis we assumed, as others have[17, 31, 42–45] and as we did in our previous study[16], that subjects had no top-down expectations or biases about the stimuli (i.e., priors). It is possible that subjects came into these experiments with priors that did not have a uniform influence on perception, and that such a prior P(X) might explain the current findings in terms of a Bayesian model. However, if such priors existed, it is unclear what they were or how they could be described in the model. Thus, the best evidence was that multisensory integration was non-Bayesian in this paradigm (i.e., the weights of the visual and inertial cues were not based on their reliability).
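
To make the empirical-versus-optimal comparison concrete: inverse-variance weights are confined to the interval [0, 1], so a negative empirical visual weight (equivalently, an inertial weight above unity) falls outside what any reliability-based weighting can produce. A minimal check, with placeholder numbers rather than the paper's fitted values:

```python
def optimal_visual_weight(sigma_visual, sigma_inertial):
    # Equivalent to inverse-variance weighting; always within (0, 1)
    # for finite, nonzero thresholds.
    return sigma_inertial ** 2 / (sigma_visual ** 2 + sigma_inertial ** 2)

# Placeholder thresholds in degrees: 8.8 echoes the with-vibration
# inertial threshold from the abstract; the visual threshold is assumed.
w_v_optimal = optimal_visual_weight(sigma_visual=6.0, sigma_inertial=8.8)

w_v_empirical = -0.1  # e.g., a subject with a negative empirical visual weight
within_bayesian_range = 0.0 <= w_v_empirical <= 1.0  # False here
deviation = w_v_empirical - w_v_optimal  # negative: inertial over-weighted
```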

 

