Research Article: De-warping of images and improved eye tracking for the scanning laser ophthalmoscope

Date Published: April 3, 2017

Publisher: Public Library of Science

Author(s): Phillip Bedggood, Andrew Metha, Marinko Sarunic.

http://doi.org/10.1371/journal.pone.0174617

Abstract

A limitation of scanning laser ophthalmoscopy (SLO) is that eye movements during the capture of each frame distort the retinal image. Various sophisticated strategies have been devised to ensure that each acquired frame can be mapped quickly and accurately onto a chosen reference frame, but such methods are blind to distortions in the reference frame itself. Here we explore a method to address this limitation in software, and demonstrate its accuracy. We used high-speed (200 fps), high-resolution (~1 μm), flood-based imaging of the human retina with adaptive optics to obtain “ground truth” information on the retinal image and motion of the eye. This information was used to simulate SLO video sequences at 20 fps, allowing us to compare various methods for eye-motion recovery and subsequent minimization of intra-frame distortion. We show that a) a single frame can be near-perfectly recovered with perfect knowledge of intra-frame eye motion; b) eye motion at a given time point within a frame can be accurately recovered by tracking the same strip of tissue across many frames, due to the stochastic symmetry of fixational eye movements, an approach similar to, and easily adapted from, previously suggested strip-registration approaches; and c) quality of frame recovery decreases with the amplitude of eye movements; however, the proposed method is less affected by this than other state-of-the-art methods, and so offers even greater advantages when fixation is poor. The new method could easily be integrated into existing image-processing software, and we provide an example implementation written in Matlab.

Partial Text

In image modalities that make use of point-focused raster scanning, each pixel in the reconstructed image is acquired at a different time. If the object of interest is in motion, this introduces distortion that cannot be removed post hoc unless a robust estimate of the object motion is available. In the case of retinal imaging especially, the constant and unavoidable motion of the fixating eye (tremors, slow drifts and microsaccades [1, 2]) compromises image fidelity. For current state-of-the-art methods such as scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT), this issue imposes limitations on the ability to quantify fine differences in tissue structure [3], or to track or target particular retinal features [4].

We acquired video sequences with high spatiotemporal resolution (200 fps, 1 μm) using a flood-illumination (non-raster) AO ophthalmoscope. Because each frame comprises pixels illuminated and captured simultaneously, distortions within frames are minimal. This “ground truth” data was used to simulate the acquisition of distorted AOSLO video sequences. We then trialed the various approaches outlined above to de-warp these images, and compared the output to our ground truth data.
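The simulation step described above can be illustrated with a short sketch: because an SLO acquires each scan row at a different time, a distorted frame can be synthesized from a distortion-free "ground truth" image by reading each row out from a position displaced by the eye motion at that row's acquisition time. The sketch below is a minimal Python illustration of this idea (the paper's implementation is in Matlab); the function and variable names, and the integer-pixel, per-row motion model, are simplifying assumptions, not the authors' code.

```python
import numpy as np

def simulate_slo_frame(truth, motion):
    """Synthesize one raster-scanned (SLO-like) frame from a ground-truth
    image. Row y is acquired at a later time than row 0, so it is read out
    from the truth image displaced by the eye position at that time.

    truth  : 2D array, distortion-free retinal image (hypothetical input)
    motion : (rows, 2) array of (dy, dx) eye displacement per scan row,
             in pixels (integer-pixel sketch)
    """
    rows, cols = truth.shape
    frame = np.zeros_like(truth)
    for y in range(rows):
        dy, dx = motion[y]
        # clamp sampling coordinates to the image bounds
        ys = np.clip(y + int(round(dy)), 0, rows - 1)
        xs = np.clip(np.arange(cols) + int(round(dx)), 0, cols - 1)
        frame[y] = truth[ys, xs]
    return frame
```

With zero motion the simulated frame reproduces the ground truth exactly; with nonzero per-row motion, each row is torn sideways relative to its neighbors, mimicking intra-frame distortion.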

An inherent assumption of the methods explored here is that knowledge of eye movements during acquisition of each SLO frame can be used to completely recover that frame. To demonstrate the validity of this, we simulated an ideal case in which the ground truth eye movement data was already known, and used it to recover each of the 100 simulated frames. The RMS similarity metric in this case was >99.99%.
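The ideal-recovery test above amounts to inverting the raster distortion: each acquired row is scattered back to the retinal location it actually sampled, given the known per-row eye motion. A minimal Python sketch of this inverse step, together with a simple RMS-based similarity score, is shown below (the authors work in Matlab; the function names, the integer-pixel motion model, and the exact normalization of the similarity metric are assumptions for illustration).

```python
import numpy as np

def dewarp_frame(frame, motion):
    """Place each acquired row back at the retinal position it sampled,
    given known per-row eye motion (integer-pixel sketch of ideal recovery)."""
    rows, cols = frame.shape
    recovered = np.zeros_like(frame)
    filled = np.zeros((rows, cols), dtype=bool)  # mask of recovered pixels
    for y in range(rows):
        dy, dx = motion[y]
        ty = y + int(round(dy))
        if not (0 <= ty < rows):
            continue  # row landed outside the recovered image
        tx = np.arange(cols) + int(round(dx))
        ok = (tx >= 0) & (tx < cols)
        recovered[ty, tx[ok]] = frame[y, ok]
        filled[ty, tx[ok]] = True
    return recovered, filled

def rms_similarity(a, b):
    """Similarity in percent based on RMS pixel error, normalized by the
    image dynamic range (a stand-in for the paper's RMS metric)."""
    a = a.astype(float)
    b = b.astype(float)
    rms = np.sqrt(np.mean((a - b) ** 2))
    return 100.0 * (1.0 - rms / (a.max() - a.min() + 1e-12))
```

Pixels never visited by the scan (because the eye moved them out of the field) remain unfilled, which is why recovery is "near-perfect" rather than exact even with perfect motion knowledge.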

Fixational eye movements made during rasterized acquisition of retinal imagery can be recovered with high precision by image registration to many independently acquired frames. This information in turn allows accurate correction of the intra-frame distortions that result from these eye movements. Such an approach has broad appeal, since it can be implemented in existing devices, or used to retroactively analyze existing data, without any change in hardware. The initial description of this approach by others did not receive widespread attention; we have revisited the general idea, showing that it approaches ideal performance in simulations. Our simulations did not take into account changes in retinal physiology, errors in de-sinusoiding, etc.; therefore, confirmation of the benefits by comparison of actual raster-scanned and flood image data obtained from the same eyes [3] is suggested.
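The registration step at the core of the strip-based approach is estimating where a narrow strip of tissue, acquired at a known time within one frame, lies within another frame. A standard way to do this is cross-correlation; the sketch below uses FFT-based circular cross-correlation in Python (the authors' implementation is Matlab, and this simplified function, its name, and its whole-pixel output are illustrative assumptions, not their code).

```python
import numpy as np

def strip_shift(strip, reference):
    """Estimate the (dy, dx) translation of a narrow strip within a
    reference image via FFT cross-correlation. Repeating this for the
    same strip position across many frames, and exploiting the stochastic
    symmetry of fixational eye movements, yields the eye position at that
    strip's acquisition time."""
    r = reference - reference.mean()
    # zero-pad the mean-subtracted strip to the reference size
    s = np.zeros_like(r)
    s[:strip.shape[0], :strip.shape[1]] = strip - strip.mean()
    # circular cross-correlation: peak location gives the offset
    xc = np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(s))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    # wrap shifts into the signed range [-N/2, N/2)
    if dy > r.shape[0] // 2:
        dy -= r.shape[0]
    if dx > r.shape[1] // 2:
        dx -= r.shape[1]
    return int(dy), int(dx)
```

In practice such estimates are refined to sub-pixel precision and averaged across many frames, so that the symmetric, zero-mean character of fixational drift cancels the distortion baked into any single reference frame.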

