A Neuromorphic Depth-From-Motion Vision Model With STDP Adaptation

2006 | Journal article. A publication affiliated with the University of Göttingen.

Cite this publication

A Neuromorphic Depth-From-Motion Vision Model With STDP Adaptation
Yang, Z.; Murray, A.; Wörgötter, F. A.; Cameron, K. & Boonsobhak, V. (2006)
IEEE Transactions on Neural Networks, 17(2), pp. 482-495. DOI: https://doi.org/10.1109/tnn.2006.871711

Documents & Media

License

GRO License

Details

Authors
Yang, Z.; Murray, A.; Wörgötter, F. A.; Cameron, K.; Boonsobhak, V.
Abstract
We propose a simplified depth-from-motion vision model based on leaky integrate-and-fire (LIF) neurons for edge detection and two-dimensional depth recovery. In the model, every LIF neuron is able to detect the irradiance edges passing through its receptive field in an optical flow field, and respond to the detection by firing a spike when the neuron's firing criterion is satisfied. If a neuron fires a spike, the time-of-travel of the spike-associated edge is transferred as the prediction information to the next synapse-linked neuron to determine its state. Correlations between input spikes and their timing thus encode depth in the visual field. The adaptation of synapses mediated by spike-timing-dependent plasticity is used to improve the algorithm's robustness against inaccuracy caused by spurious edge propagation. The algorithm is characterized on both artificial and real image sequences. The implementation of the algorithm in analog very large scale integrated (aVLSI) circuitry is also discussed.
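
The abstract builds the depth-from-motion algorithm on two generic components, leaky integrate-and-fire neurons and spike-timing-dependent plasticity. The sketch below is a minimal, hypothetical Python illustration of those two building blocks only; it does not reproduce the paper's edge-detection, depth-recovery, or aVLSI circuitry, and all names and parameter values (LIFNeuron, stdp_update, tau_m, a_plus, and so on) are assumptions made for illustration.

```python
import numpy as np


# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays toward rest and a spike is emitted when it crosses a threshold.
class LIFNeuron:
    def __init__(self, tau_m=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
        self.tau_m = tau_m        # membrane time constant (assumed, in time steps)
        self.v_rest = v_rest
        self.v_thresh = v_thresh
        self.dt = dt
        self.v = v_rest

    def step(self, input_current):
        # Forward-Euler leaky integration; returns True if the neuron fires.
        self.v += (-(self.v - self.v_rest) + input_current) * self.dt / self.tau_m
        if self.v >= self.v_thresh:
            self.v = self.v_rest  # reset after the spike
            return True
        return False


# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress when it follows; weight stays in [w_min, w_max].
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    delta_t = t_post - t_pre
    if delta_t > 0:
        w += a_plus * np.exp(-delta_t / tau_plus)
    elif delta_t < 0:
        w -= a_minus * np.exp(delta_t / tau_minus)
    return float(np.clip(w, w_min, w_max))


if __name__ == "__main__":
    neuron = LIFNeuron()
    w = 0.5                                   # synaptic weight (assumed initial value)
    pre_spike_times = set(range(0, 100, 10))  # hypothetical presynaptic spike train
    last_pre = None
    for t in range(100):
        if t in pre_spike_times:
            last_pre = t
        # Constant edge-like drive scaled by the synaptic weight (illustration only).
        if neuron.step(input_current=3.0 * w) and last_pre is not None:
            w = stdp_update(w, t_pre=float(last_pre), t_post=float(t))
            print(f"t={t}: postsynaptic spike, w={w:.3f}")
```

In the pair-based rule sketched here, a presynaptic spike that precedes a postsynaptic spike strengthens the synapse and one that follows it weakens the synapse, which is the standard form of STDP the abstract uses to suppress the effect of spurious edge propagation.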
Issue Date
2006
Journal
IEEE Transactions on Neural Networks 
ISSN
1045-9227
Language
English
