Tag: lighting

Here’s an early exploration from Disney Research into using neural networks to guide Monte Carlo integration. Interestingly, they note that their learned models can be adapted to slightly modified scenes (e.g. changes in camera), which could make the technique well suited to optimizing renders of animations. We propose to use deep neural networks for generating…
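As a rough illustration of the idea in the excerpt above, here is plain importance-sampled Monte Carlo integration, with a hand-picked Gaussian standing in for the learned proposal distribution (the paper's neural network is not reproduced here; the integrand and proposal are illustrative assumptions):

```python
import math
import random

def f(x):
    # Illustrative integrand: a peaked function on the real line.
    return math.exp(-x * x) * (1.0 + math.sin(3.0 * x) ** 2)

def gaussian_pdf(x, mu, sigma):
    # Density of the proposal distribution q.
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def importance_estimate(n, mu=0.0, sigma=1.0, seed=0):
    # Estimate the integral of f by sampling from q and averaging f(x)/q(x).
    # In the research above, a neural network would supply (and adapt) q;
    # the fixed Gaussian here is a stand-in for illustration only.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        total += f(x) / gaussian_pdf(x, mu, sigma)
    return total / n
```

The closer q matches the shape of f, the lower the variance of the estimate, which is exactly what learning the proposal aims to exploit.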
Joint Learning of Intrinsic Images and Semantic Segmentation
We’ve previously covered both semantic segmentation and intrinsic image decomposition. Here we see a novel proposal to combine the two tasks under the premise that knowledge of one can assist the other. Models and datasets are coming soon. Semantic segmentation of outdoor scenes is problematic when there are variations in imaging conditions. It is known…
Direction-aware Spatial Context Features for Shadow Detection and Removal
Shadow detection and shadow removal are fundamental and challenging tasks, requiring an understanding of the global image semantics. This paper presents a novel deep neural network design for shadow detection and removal by analyzing the image context in a direction-aware manner. To achieve this, we first formulate the direction-aware attention mechanism in a spatial recurrent…
From Faces to Outdoor Light Probes
Image-based lighting has allowed the creation of photo-realistic computer-generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially to the average digital photography enthusiast. This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer…
Deep Unsupervised Intrinsic Image Decomposition by Siamese Training
Intrinsic image decomposition means splitting the observed color of a scene into its underlying components, such as illumination and reflectance. Once this process has been performed, the layers can be manipulated independently before being recomposed to recreate a modified scene. What’s particularly interesting about this work is that it uses unsupervised training, which by definition…
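To make the "manipulate the layers independently, then recompose" idea concrete, here is a toy sketch using made-up reflectance and shading layers under the usual per-pixel product model (the paper's Siamese network is not reproduced; the arrays below are illustrative assumptions):

```python
import numpy as np

# Toy layers: observed image = reflectance * shading (Lambertian model).
reflectance = np.array([[0.8, 0.2],
                        [0.5, 0.9]])   # albedo per pixel
shading     = np.array([[1.0, 0.6],
                        [0.3, 1.0]])   # illumination per pixel
image = reflectance * shading

# Once decomposed, the layers can be edited independently, e.g. brighten
# the illumination by 50% and recompose without touching the albedo.
relit = reflectance * (shading * 1.5)
```

Recovering the two factors from `image` alone is the hard, ill-posed part that the decomposition network learns.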
Live Intrinsic Material Estimation
Whilst augmented reality may have motivated this research, there’s clear applicability of this technique to visual effects, and further research in this area is sure to pave the way to exciting new tools. We present the first end-to-end approach for real-time material estimation for general object shapes that only requires a single color image as…
HDR image reconstruction from a single exposure using deep CNNs
After creating LDR images by applying simulated camera sensor saturation to real HDR photos, the authors trained a model which could perform the inverse LDR->HDR operation and also generalize to previously unseen images. What’s more, they have released their dataset which can be downloaded from their project page. Camera sensors can only capture a limited…
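The training-data construction described above, pairing HDR ground truth with artificially saturated LDR inputs, can be sketched as follows, assuming a simple clip-and-quantize camera model rather than the authors' exact pipeline:

```python
import numpy as np

def simulate_ldr(hdr, exposure=1.0):
    """Turn linear HDR radiance into an 8-bit-style LDR image by
    simulating sensor saturation: scale, clip to [0, 1], quantize."""
    exposed = hdr * exposure
    clipped = np.clip(exposed, 0.0, 1.0)       # saturated highlights lose detail
    return np.round(clipped * 255.0) / 255.0   # 8-bit quantization

# An (hdr, ldr) pair like this would form one training example for a
# model learning the inverse LDR -> HDR mapping.
hdr = np.array([0.05, 0.5, 2.0, 10.0])         # linear radiance, may exceed 1.0
ldr = simulate_ldr(hdr)
```

Everything above 1.0 collapses to the same saturated value, so the model must hallucinate plausible highlight detail from context.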
Learning to Predict Indoor Illumination from a Single Image
Here are some superb results with clear implications for lighting workflows. What’s more, the authors intend to release their dataset – some 1,750 high-resolution HDR environment maps – soon. “We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor…
Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network
Fast approximations of high-quality renders could lead to interactive workflows that introduce additional opportunities to show clients in-progress shots which are more representative of final work. Here is some interesting research in that area: “We present Deep Illumination, a novel machine learning technique for approximating global illumination (GI) in real-time applications using a Conditional Generative…
Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks
Some great results from Disney, to be presented at this year’s SIGGRAPH Asia: “We present a technique for efficiently synthesizing images of atmospheric clouds using a combination of Monte Carlo integration and neural networks. The intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming aerosols make rendering of clouds—e.g. the characteristic silver lining and the…