From Faces to Outdoor Light Probes

Image-based lighting has enabled the creation of photo-realistic computer-generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially for the average digital photography enthusiast. This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer…
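
To see why an accurate HDR probe matters, here is a minimal sketch (not the paper's method) of how an estimated equirectangular probe is typically consumed for image-based lighting: diffuse irradiance for a surface normal is obtained by integrating the probe over the visible hemisphere. All names and sizes are illustrative.

```python
# Minimal sketch: diffuse image-based lighting from an equirectangular HDR probe.
import numpy as np

def diffuse_irradiance(probe, normal):
    """probe: (H, W, 3) float HDR equirectangular map; normal: unit 3-vector."""
    H, W, _ = probe.shape
    theta = (np.arange(H) + 0.5) / H * np.pi                 # polar angle, 0..pi
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi             # azimuth, 0..2pi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.cos(t),
                     np.sin(t) * np.sin(p)], axis=-1)        # per-texel direction
    cosine = np.clip(dirs @ np.asarray(normal), 0.0, None)   # clamp to hemisphere
    solid_angle = np.sin(t) * (np.pi / H) * (2.0 * np.pi / W)
    weight = (cosine * solid_angle)[..., None]
    return (probe * weight).sum(axis=(0, 1))                 # RGB irradiance

# Example: light arriving at an upward-facing surface from a stand-in probe.
probe = np.abs(np.random.randn(64, 128, 3)).astype(np.float32)
print(diffuse_irradiance(probe, np.array([0.0, 1.0, 0.0])))
```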

DeepWarp: DNN-based Nonlinear Deformation

DeepWarp is an efficient and highly reusable deep neural network (DNN)-based nonlinear deformable simulation framework. Unlike other deep learning applications such as image recognition, where different inputs have a uniform and consistent format (e.g., an array of all the pixels in an image), the input for deformable simulation is quite variable, high-dimensional, and parametrization-unfriendly…
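
A toy sketch of the general idea, under stated assumptions: one way to give deformable simulation a fixed-size, network-friendly input is to work in a reduced (e.g., modal) coordinate space and let a network predict a nonlinear per-vertex correction on top of the linear subspace deformation. This is not DeepWarp's actual architecture; all dimensions and names here are made up.

```python
import torch
import torch.nn as nn

class DeformationCorrector(nn.Module):
    """MLP mapping reduced coordinates to a nonlinear per-vertex 3D correction."""
    def __init__(self, n_modes: int, n_vertices: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_modes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_vertices * 3),
        )

    def forward(self, q):                              # q: (batch, n_modes)
        return self.net(q).view(q.shape[0], -1, 3)

n_modes, n_vertices = 20, 5000
basis = torch.randn(n_vertices * 3, n_modes)           # stand-in linear modal basis U
q = torch.randn(8, n_modes)                            # reduced coordinates
linear = (q @ basis.T).view(8, n_vertices, 3)          # classic linear deformation U q
corrected = linear + DeformationCorrector(n_modes, n_vertices)(q)
print(corrected.shape)                                 # torch.Size([8, 5000, 3])
```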

Unsupervised Geometry-Aware Representation for 3D Human Pose Estimation

A great example of how domain-specific knowledge can help design network architecture, in this case helping the authors make the jump from supervised learning (where labeled training data may be difficult or time-consuming to acquire) to unsupervised learning (where unlabeled training data is often plentiful). Modern 3D human pose estimation techniques rely on deep networks, which require large…
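
A simplified sketch of the encode-rotate-decode idea that makes the representation geometry-aware (the authors' exact architecture differs, and all layer sizes here are placeholders): a latent code shaped as 3D points is rotated by the known relative camera rotation, and the decoder must then reproduce the second view, so training needs only synchronized multi-view footage rather than pose labels.

```python
import torch
import torch.nn as nn

class GeoAwareAE(nn.Module):
    def __init__(self, n_latent_points: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(                   # image -> latent 3D points
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_latent_points * 3),
        )
        self.decoder = nn.Sequential(                   # latent points -> image
            nn.Linear(n_latent_points * 3, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),
        )
        self.n = n_latent_points

    def forward(self, img_view1, rot_1to2):             # rot_1to2: (batch, 3, 3)
        pts = self.encoder(img_view1).view(-1, self.n, 3)
        pts_view2 = torch.bmm(pts, rot_1to2.transpose(1, 2))   # rotate the latent geometry
        return self.decoder(pts_view2.flatten(1))

model = GeoAwareAE()
img1, img2 = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
rot = torch.eye(3).expand(4, 3, 3)                      # stand-in relative camera rotation
loss = nn.functional.mse_loss(model(img1, rot), img2)   # self-supervised cross-view loss
```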

Deep Unsupervised Intrinsic Image Decomposition by Siamese Training

Intrinsic image decomposition means splitting the observed color of a scene into its underlying components, such as illumination and reflectance. Once this process has been performed, the layers can be manipulated independently before being recomposed to recreate a modified scene. What’s particularly interesting about this work is that it uses unsupervised training, which by definition…
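
A minimal sketch of the intrinsic image model this line of work builds on: the observed image is the product of reflectance (albedo) and shading, so once the layers are separated each can be edited independently and recomposed. The arrays below are stand-ins for what a network would output.

```python
import numpy as np

image = np.random.rand(4, 4, 3)                        # stand-in photograph, values in [0, 1]
reflectance = np.random.rand(4, 4, 3) * 0.9 + 0.1      # pretend network output (albedo)
shading = image / reflectance                          # the remaining layer is shading

brighter = reflectance * (shading * 1.5)               # relight: scale only the shading
recolored = (reflectance * [1.2, 1.0, 0.8]) * shading  # or edit only the albedo
assert np.allclose(reflectance * shading, image)       # recomposition recovers the input
```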

Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation

Although this technique to estimate camera motion and decompose the scene into rigid/dynamic motion (a potential aid to segmentation) relies on a depth channel in the input images, it may not be long until new approaches are developed which can operate on RGB only. Estimation of 3D motion in a dynamic scene from a temporal…
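
For intuition, here is an illustrative sketch using standard multi-view geometry (not the paper's network): given per-pixel depth and an ego-motion estimate, the "rigid" flow induced purely by the camera can be predicted and compared with the observed flow, and pixels that disagree are likely to belong to independently moving objects. The intrinsics and motion values are placeholders.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """depth: (H, W); K: 3x3 intrinsics; R, t: camera motion from frame 1 to frame 2."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T    # (3, H*W)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                  # back-project
    proj = K @ (R @ pts + t.reshape(3, 1))                               # move and reproject
    uv2 = (proj[:2] / proj[2:]).T.reshape(H, W, 2)
    return uv2 - np.stack([u, v], axis=-1)                               # pixel motion

H, W = 120, 160
K = np.array([[100.0, 0, W / 2], [0, 100.0, H / 2], [0, 0, 1]])
flow_cam = rigid_flow(np.full((H, W), 5.0), K, np.eye(3), np.array([0.1, 0, 0]))
observed_flow = flow_cam.copy(); observed_flow[40:60, 40:60] += 3.0      # a moving object
dynamic_mask = np.linalg.norm(observed_flow - flow_cam, axis=-1) > 1.0   # disagreement map
```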

Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their method requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains…
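
A simplified sketch of that feed-forward idea (the paper's multi-scale generator and loss details differ): a small generator maps noise to an image and is trained so that its VGG Gram matrices match those of the single texture example; afterwards, synthesis is one cheap forward pass. The generator layout here is a placeholder, and ImageNet-pretrained VGG weights would be loaded in practice.

```python
import torch
import torch.nn as nn
import torchvision

vgg = torchvision.models.vgg16().features[:16].eval()   # load pretrained weights in practice
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(x):                                             # second-order texture statistics
    b, c, h, w = x.shape
    f = x.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

generator = nn.Sequential(                               # noise in, texture image out
    nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
optim = torch.optim.Adam(generator.parameters(), lr=1e-3)

exemplar = torch.rand(1, 3, 96, 96)                      # the single texture example
target = gram(vgg(exemplar))

for step in range(100):                                  # the burden sits here, at training time
    noise = torch.rand(1, 8, 96, 96)
    loss = nn.functional.mse_loss(gram(vgg(generator(noise))), target)
    optim.zero_grad(); loss.backward(); optim.step()

new_texture = generator(torch.rand(1, 8, 96, 96))        # run time: a single forward pass
```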

Accelerating Eulerian Fluid Simulation With Convolutional Networks

When asking those in the VFX industry which processes are the slowest and most compute-intensive, fluid simulation is bound to be somewhere towards the top of the list. Physical phenomena such as fire and water are notoriously difficult to control, even in the hands of the most experienced artists, and usually require a significant number…
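
A rough sketch of the core idea in this line of work, with placeholder architecture and data (details differ from the paper): inside a standard Eulerian solver the costly step is the Poisson solve for pressure, and a convolutional network can be trained to approximate it, mapping velocity divergence and obstacle masks to a pressure field that then corrects the velocities as usual.

```python
import torch
import torch.nn as nn

class PressureNet(nn.Module):
    """CNN standing in for the iterative pressure-projection solve."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                 # divergence + occupancy -> pressure
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, divergence, obstacles):
        return self.net(torch.cat([divergence, obstacles], dim=1))

net = PressureNet()
divergence = torch.randn(1, 1, 64, 64)            # stand-in simulation grid
obstacles = torch.zeros(1, 1, 64, 64)
pressure = net(divergence, obstacles)             # replaces the iterative Poisson solve
grad_y, grad_x = torch.gradient(pressure[0, 0])   # subtract grad(p) from the velocity field
```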

Learning to Segment Every Thing

Producing accurate pixel-level masks around specific objects within images is, of course, a common task in VFX. Current solutions can be labor-intensive, and the results from one task cannot be used directly to improve the quality of future work. Existing tools generally do not know the semantic context of the object whose mask is being…
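
For context, here is a minimal way to get semantically labeled per-object masks today with torchvision's off-the-shelf Mask R-CNN; this is plain instance segmentation, not the paper's partially supervised extension to many more categories, and the input tensor below is a stand-in for a loaded frame.

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()
image = torch.rand(3, 480, 640)                   # stand-in for an RGB frame in [0, 1]
with torch.no_grad():
    out = model([image])[0]                       # one result dict per input image

keep = out["scores"] > 0.7                        # keep confident detections only
masks = out["masks"][keep, 0] > 0.5               # (N, H, W) boolean pixel masks
labels = out["labels"][keep]                      # semantic class id per mask
```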

Live Intrinsic Material Estimation

Whilst augmented reality may have motivated this research, there’s clear applicability of this technique to visual effects, and further research in this area is sure to pave the way to exciting new tools. We present the first end-to-end approach for real-time material estimation for general object shapes that only requires a single color image as…
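
Purely for illustration of what material estimation buys you downstream: given per-object parameters such as diffuse albedo and specular shininess (here fed into a simple Blinn-Phong model, not the paper's reflectance model), an object can be re-shaded under new lighting, e.g. for compositing. The parameter values below are hypothetical.

```python
import numpy as np

def shade(normal, view_dir, light_dir, albedo, specular, shininess):
    n, v, l = (x / np.linalg.norm(x) for x in (normal, view_dir, light_dir))
    h = (v + l) / np.linalg.norm(v + l)                     # half vector
    diffuse = albedo * max(np.dot(n, l), 0.0)
    spec = specular * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + spec                                   # RGB under the new light

# Hypothetical estimates such a network might produce for one surface point:
color = shade(normal=np.array([0, 0, 1.0]), view_dir=np.array([0, 0, 1.0]),
              light_dir=np.array([0.3, 0.3, 1.0]),
              albedo=np.array([0.8, 0.2, 0.2]), specular=0.4, shininess=32)
```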

DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks

Whilst the final image quality might not quite be there yet, there is surely more to come from this extremely promising area of research. We present an end-to-end learning approach for motion deblurring, which is based on a conditional GAN and a content loss. It improves the state of the art in terms of peak signal-to-noise…
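
A sketch of the loss structure the abstract describes, with stand-in networks (the paper's exact adversarial formulation and architectures differ, and pretrained VGG weights would be loaded in practice): the generator maps a blurred photo towards a sharp one and is trained with an adversarial term from a critic plus a VGG-feature "content" loss against the ground-truth sharp image.

```python
import torch
import torch.nn as nn
import torchvision

vgg = torchvision.models.vgg16().features[:16].eval()       # perceptual feature extractor
for p in vgg.parameters():
    p.requires_grad_(False)

generator = nn.Sequential(                                   # stand-in restoration network
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
critic = nn.Sequential(                                      # stand-in discriminator/critic
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

blurred = torch.rand(2, 3, 128, 128)                         # paired training crops
sharp = torch.rand(2, 3, 128, 128)

restored = generator(blurred)
adversarial = -critic(restored).mean()                       # fool the critic
content = nn.functional.mse_loss(vgg(restored), vgg(sharp))  # perceptual "content" loss
loss_G = adversarial + 100.0 * content                       # combined generator objective
loss_G.backward()
```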