Our goal is to develop a neurophysiologically inspired algorithm for improved electrical stimulation protocols in patients implanted with electronic prostheses. By 2020 roughly 200 million people will suffer from retinal diseases. Electronic prostheses, which stimulate remaining retinal cells with electrical current (analogous to a cochlear implant), are currently being implanted in patients and show promise in restoring some vision.
These prostheses require a way to translate the visual input into an electrical stimulation protocol, and the current methods of translation are known to be inadequate. Our goal is to develop better coding schemes and see whether they have the potential to improve the vision produced by these devices.
A major challenge with these prostheses is developing electrical stimulation protocols that properly convey a visual percept. Previous work developed a ‘forward’ model that predicts a perceived image given a set of electrical stimuli; however, a ‘reverse’ model that predicts the appropriate sequence of electrical stimuli given a desired percept is required for informing improvements to visual prostheses.
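One common way to obtain a ‘reverse’ model is to invert the forward model numerically: given a target percept, solve for the stimuli that the forward model maps closest to it. The sketch below illustrates this with a purely hypothetical linear forward model (the matrix `A` and all dimensions are illustrative assumptions, not the project's actual model) inverted by least squares.

```python
import numpy as np

# Hypothetical linear forward model: percept = A @ stimulus, where A maps
# electrode current amplitudes to perceived pixel brightness. A, the
# dimensions, and the target image are illustrative stand-ins only.
rng = np.random.default_rng(0)
n_pixels, n_electrodes = 64, 16
A = rng.random((n_pixels, n_electrodes))

target_percept = rng.random(n_pixels)  # desired image, flattened

# 'Reverse' model as least-squares inversion of the forward model:
# find the stimulus whose predicted percept best matches the target.
stimulus, *_ = np.linalg.lstsq(A, target_percept, rcond=None)

# Running the forward model on the recovered stimuli gives the percept
# the device would actually be expected to produce.
predicted_percept = A @ stimulus
```

A realistic forward model is nonlinear and time-varying, so in practice the inversion would use iterative optimization rather than a closed-form solve, but the structure of the problem is the same.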
As part of the eScience Incubator, the ‘forward’ model of a retina with implanted electrodes has been implemented in Python. Work on speeding up the convolution steps (challenging due to the high sampling rates involved) will facilitate development and implementation of the ‘reverse’ model. Data and insights from this modeling may also be useful to regulators.
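A standard way to speed up such convolutions is to compute them in the frequency domain, which scales as O(n log n) rather than the O(n·m) of the direct sum. The sketch below (the sampling rate, signals, and kernel are illustrative assumptions, not the project's actual data) shows the FFT-based `scipy.signal.fftconvolve` producing the same result as direct convolution.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative setup: a long, finely sampled stimulus train convolved
# with a temporal kernel. The high sampling rate makes the signals long,
# which is exactly when direct convolution becomes slow.
fs = 10_000                        # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)   # one second of signal
stimulus = np.sin(2 * np.pi * 20 * t)
kernel = np.exp(-t[:500] / 0.005)  # short exponential-decay kernel

# FFT-based convolution: O(n log n) via the convolution theorem.
fast = fftconvolve(stimulus, kernel, mode="full")

# Direct convolution: O(n * m), used here only to check the result.
direct = np.convolve(stimulus, kernel, mode="full")
```

Both calls return the same values up to floating-point error; only the runtime differs, and the gap grows with signal length.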