Peeking Behind Objects: Layered Depth Prediction from a Single Image

Here’s a nice approach that combines depth estimation with in-painting to allow simulation of slight camera moves from just a single image. While conventional depth estimation can infer the geometry of a scene from a single RGB image, it fails to estimate scene regions that are occluded by foreground objects. This limits the use of…
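For intuition, simulating a small camera move from a single image with layered depth amounts to reprojecting foreground pixels by a depth-dependent disparity and letting an in-painted background layer show through the disocclusions. The sketch below only illustrates that general idea; the simple horizontal-baseline pinhole model and every function and parameter name are illustrative assumptions, not the paper’s actual pipeline.

```python
import numpy as np

def simulate_camera_shift(rgb, depth, background, fx=500.0, baseline=0.02):
    """Warp `rgb` to a slightly translated camera using per-pixel `depth`,
    filling disocclusions from an in-painted `background` layer.

    The horizontal-baseline pinhole model and all names here are
    illustrative assumptions, not the method from the paper.
    """
    h, w, _ = rgb.shape
    # Horizontal disparity (in pixels) induced by a small sideways move.
    disparity = (fx * baseline / np.clip(depth, 1e-3, None)).astype(np.int32)

    out = background.copy()           # start from the occluded-region estimate
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(xs + disparity[y], 0, w - 1)
        out[y, new_x] = rgb[y, xs]    # foreground pixels overwrite the background
    return out

# Usage with random stand-in data:
rgb = np.random.rand(240, 320, 3)
depth = np.random.uniform(1.0, 5.0, (240, 320))
background = np.random.rand(240, 320, 3)   # pretend in-painted layer
novel_view = simulate_camera_shift(rgb, depth, background)
```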

Deep Generative Modeling for Scene Synthesis via Hybrid Representations

A recent post demonstrated a data-driven system for indoor scene synthesis. This work has a similar motivation but is noteworthy not only for its results – which allow for repeated inclusion of a given type of object and interpolation of entire scenes – but also for the rigorous analysis of the approach, which is bound to…

Human Motion Modeling using DVGANs

We present a novel generative model for human motion modeling using Generative Adversarial Networks (GANs). We formulate the GAN discriminator using dense validation at each time-scale and perturb the discriminator input to make it translation invariant. Our model is capable of motion generation and completion. We show through our evaluations the resiliency to noise, generalization…
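A minimal sketch of the “perturb the discriminator input” idea: randomly shift a motion sequence in time before the critic sees it, so it cannot key on absolute frame positions. The sequence shape, the layer widths, and the shift range below are my own illustrative assumptions, not the DVGANs architecture.

```python
import torch
import torch.nn as nn

class ShiftInvariantDiscriminator(nn.Module):
    """Toy motion-sequence critic that randomly shifts its input in time.

    Sequence shape: (batch, joint_channels, frames). The architecture and
    the +/- 4 frame shift range are illustrative assumptions only.
    """
    def __init__(self, channels=63, max_shift=4):
        super().__init__()
        self.max_shift = max_shift
        self.net = nn.Sequential(
            nn.Conv1d(channels, 128, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> one score per sequence
            nn.Flatten(),
            nn.Linear(128, 1),
        )

    def forward(self, seq):
        if self.training:
            # Circular shift in time so the critic cannot rely on frame index.
            shift = int(torch.randint(-self.max_shift, self.max_shift + 1, (1,)))
            seq = torch.roll(seq, shifts=shift, dims=-1)
        return self.net(seq)

disc = ShiftInvariantDiscriminator()
fake_motion = torch.randn(8, 63, 120)   # 8 sequences, 63 joint channels, 120 frames
scores = disc(fake_motion)              # shape (8, 1)
```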

Learning Category-Specific Mesh Reconstruction from Image Collections

Some truly remarkable results given the dataset on which this model was trained. Code will be available on GitHub in the future. We present a learning framework for recovering the 3D shape, camera, and texture of an object from a single image. The shape is represented as a deformable 3D mesh model of an object…
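One way to picture the “deformable 3D mesh” representation is a learned category-level mean shape plus per-vertex offsets predicted from each image. The vertex count, feature size, and single linear head below are a simplified assumption, not the authors’ network.

```python
import torch
import torch.nn as nn

class MeshPredictor(nn.Module):
    """Predict per-vertex offsets from an image feature vector and add them
    to a learnable category-level mean shape. Sizes are illustrative."""
    def __init__(self, num_verts=642, feat_dim=256):
        super().__init__()
        # Learnable mean shape shared across the whole category.
        self.mean_shape = nn.Parameter(torch.randn(num_verts, 3) * 0.01)
        self.deform_head = nn.Linear(feat_dim, num_verts * 3)

    def forward(self, image_feat):
        delta = self.deform_head(image_feat).view(-1, self.mean_shape.shape[0], 3)
        return self.mean_shape.unsqueeze(0) + delta   # (batch, num_verts, 3)

model = MeshPredictor()
verts = model(torch.randn(4, 256))   # 4 image features -> 4 predicted meshes
```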

tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow

Generative Adversarial Networks (GANs) have been on a tear these last few months, providing rapid advances, particularly in the field of realistic image generation. This application to fluids is interesting not only because it extends the architecture into 3D but also for what it allows. It’s long been desired to use a low-resolution proxy layout…
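To make “extends the architecture into 3D” concrete, here is a toy volumetric super-resolution generator built from 3D convolutions that upsamples a coarse density volume. The layer widths and the 4x upscaling factor are assumptions for illustration, not tempoGAN’s actual network.

```python
import torch
import torch.nn as nn

class VolumetricUpscaler(nn.Module):
    """Toy generator that upsamples a low-resolution density volume 4x
    using 3D convolutions. Widths and the 4x factor are illustrative."""
    def __init__(self, in_ch=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False),
            nn.Conv3d(width, in_ch, kernel_size=3, padding=1),
        )

    def forward(self, low_res):
        return self.net(low_res)

gen = VolumetricUpscaler()
coarse = torch.randn(1, 1, 16, 16, 16)   # low-resolution smoke proxy
fine = gen(coarse)                       # -> (1, 1, 64, 64, 64)
```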

DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks

Whilst the final image quality might not quite be there yet, there is surely more to come from this extremely promising area of research. We present an end-to-end learning approach for motion deblurring, which is based on a conditional GAN and content loss. It improves the state of the art in terms of peak signal-to-noise…
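The loss named in the excerpt – an adversarial term plus a content term – can be sketched as a single generator objective. The critic-score adversarial form, the L1 comparison of deep features, the weighting, and the stand-in modules below are my own assumptions rather than DeblurGAN’s exact formulation.

```python
import torch
import torch.nn.functional as F

def deblur_generator_loss(discriminator, feature_extractor,
                          restored, sharp, content_weight=100.0):
    """Combined generator loss: an adversarial term from a critic scoring the
    restored image, plus a content term comparing deep features of the
    restored and ground-truth sharp images. The critic form, feature network,
    and weighting are illustrative assumptions.
    """
    adv_loss = -discriminator(restored).mean()          # critic-style adversarial term
    content_loss = F.l1_loss(feature_extractor(restored),
                             feature_extractor(sharp))  # feature-space content term
    return adv_loss + content_weight * content_loss

# Usage with stand-in modules and random data:
disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
feats = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU())
restored = torch.randn(2, 3, 64, 64, requires_grad=True)
sharp = torch.randn(2, 3, 64, 64)
loss = deblur_generator_loss(disc, feats, restored, sharp)
loss.backward()
```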

Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network

Fast approximations of high-quality renders could lead to interactive workflows that introduce additional opportunities to show clients in-progress shots which are more representative of the final work. Here is some interesting research in that area: “We present Deep Illumination, a novel machine learning technique for approximating global illumination (GI) in real-time applications using a Conditional Generative…
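A conditional generator for this kind of task typically maps screen-space buffers (for example depth, normals, and direct lighting) to an approximate indirect-lighting image. The buffer set, channel counts, and tiny encoder-decoder below are illustrative assumptions, not the Deep Illumination network.

```python
import torch
import torch.nn as nn

class GIGenerator(nn.Module):
    """Toy conditional generator: concatenated screen-space buffers in,
    approximate global-illumination image out. Buffer channels and layer
    sizes are illustrative assumptions."""
    def __init__(self, in_ch=7, width=64):   # e.g. depth(1) + normals(3) + direct(3)
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, buffers):
        return self.decode(self.encode(buffers))

gen = GIGenerator()
buffers = torch.rand(1, 7, 256, 256)   # stacked depth, normal, direct-light buffers
gi = gen(buffers)                      # -> (1, 3, 256, 256) predicted indirect light
```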