Learning to Predict Indoor Illumination from a Single Image

Here are some superb results with clear implications for lighting workflows. What’s more, the authors intend to release their dataset – some 1750 high-resolution HDR environment maps – soon.

“We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, we can achieve results that are photo-realistic, which is validated via a perceptual user study.”
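To make the three-step recipe concrete, here is a minimal PyTorch sketch of how the two training stages might be wired together. The architecture, environment-map resolution, loss functions, and the synthetic tensors standing in for data are all illustrative assumptions on our part – the abstract does not specify the authors’ actual network.

```python
# Minimal sketch of the paper's high-level training recipe (assumed, not
# the authors' actual architecture): one encoder with two heads, trained
# first for light locations (stage 2), then fine-tuned for HDR
# intensities (stage 3).

import torch
import torch.nn as nn


class IlluminationNet(nn.Module):
    """Regress a limited-FOV LDR photo to a low-resolution env map."""

    def __init__(self, env_h=32, env_w=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Head 1: probability of a light source at each env-map pixel.
        self.mask_head = nn.Linear(128, env_h * env_w)
        # Head 2: log-HDR radiance at each env-map pixel.
        self.intensity_head = nn.Linear(128, env_h * env_w)
        self.env_shape = (env_h, env_w)

    def forward(self, photo):
        feat = self.encoder(photo).flatten(1)
        h, w = self.env_shape
        mask = torch.sigmoid(self.mask_head(feat)).view(-1, 1, h, w)
        log_intensity = self.intensity_head(feat).view(-1, 1, h, w)
        return mask, log_intensity


net = IlluminationNet()
bce, l2 = nn.BCELoss(), nn.MSELoss()

# Stage 2: train the location head on LDR panoramas whose light sources
# were annotated automatically by the stage-1 classifier.
photo = torch.rand(8, 3, 128, 128)                       # limited-FOV crops
light_mask = (torch.rand(8, 1, 32, 64) > 0.95).float()   # stage-1 labels
mask_pred, _ = net(photo)
loss_location = bce(mask_pred, light_mask)

# Stage 3: fine-tune on the (much smaller) HDR set so the intensity head
# learns actual light intensities.
hdr_env = torch.rand(8, 1, 32, 64) * 100.0               # HDR radiance
_, log_int_pred = net(photo)
loss_intensity = l2(log_int_pred, torch.log1p(hdr_env))

print(loss_location.item(), loss_intensity.item())
```

The key design point the abstract emphasizes is the staging: plentiful LDR panoramas (cheap, but with clipped highlights) supply the *where* of the lights, while the scarce HDR captures are reserved for learning the *how bright*.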


Are you aware of some research that warrants coverage here? Contact us or let us know in the comments section below!
