A Bio-inspired Synergistic Virtual Retina Model for Tone Mapping
Marco Benzi, Maria-Jose Escobar, Adrien Bousseau, Pierre Kornprobst
CVIU 2017: Special Issue on Vision and Computational Photography and Graphics
Real-world radiance values span several orders of magnitude, which artificial systems must process in order to capture visual scenes with high visual sensitivity. Interestingly, similar processing has been found in biological systems, starting at the retina level. Our motivation in this paper is therefore to develop a new video tone mapping operator (TMO) based on a synergistic model of the retina. We start from the so-called Virtual Retina model, which was developed in computational neuroscience. We show how to enrich this model with new features so that it can serve as a TMO, such as color management, luminance adaptation at the photoreceptor level, and a readout from a heterogeneous population activity. Our method works for video but can also be applied to static images, treated as videos of a single repeated frame. It has been carefully evaluated on standard benchmarks in the static case, giving results comparable to the state of the art with default parameters, while offering user control for finer tuning. Results on HDR video are also promising, specifically with respect to temporal luminance coherency. Code is available as a Python notebook and a C++ implementation on GitHub, so that readers can test and experiment with the approach step by step. As a whole, this paper shows a promising way to address computational photography challenges by exploiting current neuroscience research on retinal processing.
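The photoreceptor-level luminance adaptation mentioned in the abstract is commonly modeled in retina-based TMOs with a Naka-Rushton type compression, where the semi-saturation constant is set from scene statistics. The sketch below illustrates that general idea only; the function name, the exponent value, and the use of the geometric mean as the adaptation level are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def naka_rushton(luminance, n=0.73):
    """Naka-Rushton style photoreceptor compression (illustrative sketch).

    Maps positive HDR luminance to [0, 1] via L^n / (L^n + sigma^n),
    with sigma (semi-saturation) set to the geometric mean of the scene,
    a common proxy for the global adaptation level.
    """
    L = np.maximum(luminance, 1e-6)          # avoid log(0) on dark pixels
    sigma = np.exp(np.mean(np.log(L)))        # geometric mean as adaptation level
    Ln = L ** n
    return Ln / (Ln + sigma ** n)

# Synthetic HDR luminance spanning several orders of magnitude
rng = np.random.default_rng(0)
hdr = np.exp(rng.normal(0.0, 2.0, size=(64, 64)))
ldr = naka_rushton(hdr)
```

Because the compression is monotonic, relative ordering of luminances is preserved while the dynamic range is squeezed into the displayable interval.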