Friday, June 14, 2013

[Comp-neuro] Open-source code for massively parallel neural encoding/decoding of visual stimuli/scenes

Source code for encoding and decoding natural and synthetic visual scenes (videos) with
Time Encoding Machines consisting of Gabor or center-surround receptive fields in cascade
with Integrate-and-Fire neurons is available at http://www.bionet.ee.columbia.edu/code/vtem.
The code is written in Python/PyCUDA and runs on a single GPU.
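
To make the architecture concrete, here is a minimal CPU-only sketch in plain NumPy of the pipeline the package parallelizes with PyCUDA: a spatial Gabor receptive field reduces each grayscale frame to a scalar drive, and an ideal Integrate-and-Fire neuron converts that drive into spike times. All names and parameter values below are illustrative assumptions, not the package's API, and only a single channel of the massively parallel pipeline is shown.

import numpy as np

def gabor_rf(size, wavelength=8.0, sigma=4.0, theta=0.0):
    """Spatial Gabor receptive field on a size x size pixel grid."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Isotropic Gaussian envelope times an oriented cosine carrier.
    return np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * xr / wavelength)

def iaf_encode(u, dt, bias=1.0, kappa=1.0, delta=0.02):
    """Ideal Integrate-and-Fire encoder: integrate (u + bias)/kappa, spike at threshold delta."""
    spike_times, v, t = [], 0.0, 0.0
    for sample in u:
        v += dt * (sample + bias) / kappa
        t += dt
        if v >= delta:
            spike_times.append(t)
            v -= delta  # reset by subtracting the threshold
    return np.asarray(spike_times)

# Toy usage: one Gabor/IAF channel encoding a synthetic grayscale video.
dt, n_frames, size = 1e-3, 1000, 64                   # illustrative values only
video = np.random.rand(n_frames, size, size)          # stand-in for a real video
u = np.tensordot(video, gabor_rf(size), axes=([1, 2], [0, 1]))  # receptive-field drive over time
u = 0.5 * u / np.max(np.abs(u))                       # keep |u| < bias so the encoding is invertible
spikes = iaf_encode(u, dt)
print(len(spikes), "spikes")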

The current release supports grayscale videos. Stay tuned for color and multi-GPU implementations.

A visual demonstration of decoding a short video stimulus encoded with a Video Time Encoding Machine
consisting of 100,000 Hodgkin-Huxley neurons is available at: http://www.bionet.ee.columbia.edu/research/nce
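
For a single ideal Integrate-and-Fire channel (a deliberate simplification of the Hodgkin-Huxley population used in the demonstration), decoding amounts to inverting the t-transform: each interspike interval fixes the integral of the input over that interval, and the input is recovered as a weighted sum of bandlimited sinc kernels whose coefficients solve a linear system. The sketch below, again plain NumPy with illustrative parameters, is not the package's solver.

import numpy as np

def iaf_decode(spike_times, dt, T, bias=1.0, kappa=1.0, delta=0.02,
               bandwidth=2.0 * np.pi * 10.0):
    """Recover a bandlimited input from ideal IAF spike times (t-transform + pseudoinverse)."""
    tk = np.asarray(spike_times)
    # t-transform: the integral of the input over each interspike interval.
    q = kappa * delta - bias * np.diff(tk)
    sk = 0.5 * (tk[:-1] + tk[1:])                      # representation points
    g = lambda x: bandwidth / np.pi * np.sinc(bandwidth * x / np.pi)   # sinc kernel
    # G[k, l] approximates the integral of g(t - sk[l]) over the k-th interval
    # (crude Riemann sum; the package's GPU solver is far more careful).
    G = np.empty((len(q), len(sk)))
    for k in range(len(q)):
        tt = np.arange(tk[k], tk[k + 1], dt)
        G[k] = dt * g(tt[:, None] - sk[None, :]).sum(axis=0)
    c = np.linalg.pinv(G) @ q
    t = np.arange(0.0, T, dt)
    return t, g(t[:, None] - sk[None, :]) @ c

Continuing the encoding sketch above, t_rec, u_rec = iaf_decode(spikes, dt, T=n_frames * dt) returns a reconstruction of the receptive-field drive; recovery is faithful only when that drive is bandlimited to the assumed bandwidth, its amplitude stays below the bias, and the neuron fires densely enough.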

Aurel
