SIGGRAPH 2018

SIGGRAPH 2018 is just around the corner. Running from Sunday, August 12th through Thursday, August 16th in Vancouver, British Columbia, it is shaping up to be a bumper year for presentations that cover applications of deep learning and demonstrate some truly stunning results.

The full schedule is here, but as our selected examples below show, it would be possible to spend the entire week attending only sessions related to machine learning. Will you be at the conference? Are you presenting and would like to shamelessly promote your talk? Did we miss something that really should be on this list? Are you reading this after the conference and want to tell us what you enjoyed most? Say hello in the comments below!

Sunday, August 12th

I CAN SEE CLEARLY NOW (TALKS), 9AM – 10:30AM, WEST BUILDING, ROOM 109-110, VANCOUVER CONVENTION CENTRE

DEEPFOCUS: LEARNED IMAGE SYNTHESIS FOR COMPUTATIONAL DISPLAYS: This work introduces DeepFocus, a generic, end-to-end trainable convolutional neural network designed to efficiently solve the full range of computational tasks for accommodation-supporting HMDs. The method is demonstrated to synthesize physically plausible defocus blur, multilayer decompositions, and multiview imagery in real time using commonly available RGB-D images.

DEEP LEARNING: A CRASH COURSE (COURSE), 2PM – 5:15PM, EAST BUILDING, BALLROOM BC, VANCOUVER CONVENTION CENTRE

Deep learning is a revolutionary technique for discovering patterns from data. We’ll see how this technology works and what it offers us for computer graphics. There won’t be any math. Attendees will learn how to use these tools to power their own creative and practical investigations and applications.

IT’S A MATERIAL WORLD (TALKS), 2PM – 3:30PM, WEST BUILDING, ROOM 109-110, VANCOUVER CONVENTION CENTRE

MULTIPLE SCATTERING IN PARTICIPATING MEDIA USING NEURAL NETWORKS: We present a neural network model that represents multiple scattering events in participating media. The model provides a very compact representation of precomputed multiple scattering and can be combined with many existing rendering algorithms, producing similar results at a fraction of the memory cost.

IEEE TVCG SESSION ON ADVANCES IN DATA VISUALIZATION (TALKS), 3:45PM – 5:15PM, WEST BUILDING, ROOM 118-120, VANCOUVER CONVENTION CENTRE

ACTIVIS: VISUAL EXPLORATION OF INDUSTRY-SCALE DEEP NEURAL NETWORK MODELS: While deep learning has led to major breakthroughs in various domains, understanding these models remains a challenge. Through participatory design sessions with researchers and engineers at Facebook, we design and develop ActiVis, a visual analytics system for deep neural network models. ActiVis has been deployed on Facebook’s machine learning platform.

Monday, August 13th

THE FUTURE’S WAITING (SIGGRAPH NEXT), 8AM – 8:45AM, WEST BUILDING, ROOM 118-120, VANCOUVER CONVENTION CENTRE

We know that change generally takes five to ten years, at best, to be realized within society. If that is true, then predictors were in place five years ago that could have given us insight into what our world would look like today. This talk discusses current trends that might tell us what the future could look like in five to ten years. The future is waiting.

COMPUTATIONAL PHOTOGRAPHY (TECHNICAL PAPERS), 10:45AM – 12:35PM, WEST BUILDING, ROOM 301-305, VANCOUVER CONVENTION CENTRE

DEEP EXEMPLAR-BASED COLORIZATION: This paper proposes the first deep learning approach for exemplar-based colorization, in which a convolutional neural network robustly maps a grayscale image to a colorized output given a color reference.

DEEP CONTEXT-AWARE DESCREENING AND RESCREENING OF HALFTONE IMAGES: We present two-stage deep neural networks for descreening and rescreening halftone images, which can remove halftone artifacts from printed photos and also reproduce existing halftone patterns.

NON-STATIONARY TEXTURE SYNTHESIS BY ADVERSARIAL EXPANSION: This paper proposes a new GAN-based approach for example-based non-stationary texture synthesis. It can cope with challenging textures, which, to our knowledge, no other existing method can handle.

VIRTUALLY HUMAN (TECHNICAL PAPERS), 2PM – 3:30PM, WEST BUILDING, BALLROOM C, VANCOUVER CONVENTION CENTRE

DEEP LEARNING OF BIOMIMETIC SENSORIMOTOR CONTROL FOR BIOMECHANICAL HUMAN ANIMATION: This biomimetic human sensorimotor control features a biomechanically-simulated musculoskeletal model with human-like eyes and perception, which is actuated by numerous muscles activated by neuromuscular controllers employing automatically-trained deep neural networks.

Tuesday, August 14th

CONNECTIONS: THE INTERSECTION OF GRAPHICS AND MEDICINE (SIGGRAPH NEXT), 8AM – 8:45AM, WEST BUILDING, ROOM 118-120, VANCOUVER CONVENTION CENTRE

As CG reaches a cusp where we can mimic visual reality, we are challenged to use it to solve complex analytical problems in the world around us. Intersecting deep learning and artificial intelligence with advanced graphics provides groundbreaking new approaches. In the field of biomedicine specifically, this session discusses examples ranging from computer vision in microscopy to machine learning that recognizes cancer-cell anomalies in a pathology dashboard of the future.

COMPUTATIONAL PHOTOS AND VIDEOS (TECHNICAL PAPERS), 9AM – 10:30AM, WEST BUILDING, ROOM 211-214, VANCOUVER CONVENTION CENTRE

SYNTHETIC DEPTH-OF-FIELD WITH A SINGLE-CAMERA MOBILE PHONE: We describe a method for synthetically creating a shallow depth-of-field image on a mobile phone. Our system is the basis for “Portrait Mode” on the Google Pixel 2 smartphones.
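
For intuition, the last step of any such system is rendering depth-dependent blur. Below is a minimal sketch of that step only, assuming a per-pixel disparity map is already available; the function and its parameters are our own illustration, not the paper's pipeline.

```python
# A minimal sketch of depth-dependent synthetic blur (illustrative only,
# not Google's actual pipeline): blur each pixel in proportion to its
# distance from the in-focus disparity plane.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, disparity, focus_disparity, max_sigma=8.0, levels=8):
    """image: HxWx3 float array; disparity: HxW float array."""
    # Per-pixel blur strength: zero at the focus plane, growing with defocus.
    coc = np.abs(disparity - focus_disparity)
    coc = coc / (coc.max() + 1e-8)  # normalize to [0, 1]

    # Precompute a small stack of progressively blurred images.
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = [image] + [
        np.stack([gaussian_filter(image[..., c], s) for c in range(3)], axis=-1)
        for s in sigmas[1:]
    ]

    # Blend between adjacent stack levels according to each pixel's blur amount.
    idx = coc * (levels - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    frac = (idx - lo)[..., None]
    out = np.zeros_like(image)
    for i in range(levels):
        out += np.where((lo == i)[..., None], (1 - frac) * stack[i], 0)
        out += np.where((hi == i)[..., None], frac * stack[i], 0)
    return out
```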

STEREO MAGNIFICATION: LEARNING VIEW SYNTHESIS USING MULTIPLANE IMAGES: We address the problem of synthesizing new views from stereo images. We propose a new representation – “multiplane image” – and learn such representations using large amounts of training video.
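
At its core, rendering a novel view from a multiplane image comes down to back-to-front alpha compositing of the layers. A minimal sketch of that compositing step (the actual method also warps each layer into the target view via per-depth homographies, which we omit here):

```python
# Composite a stack of fronto-parallel RGBA layers back to front with the
# standard "over" operator.
import numpy as np

def composite_mpi(layers):
    """layers: list of HxWx4 RGBA arrays, ordered back (far) to front (near)."""
    out = np.zeros_like(layers[0][..., :3])
    for layer in layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # "over" compositing
    return out
```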

INTERACTION/VR (TECHNICAL PAPERS), 9AM – 10:30AM, WEST BUILDING, ROOM 109-110, VANCOUVER CONVENTION CENTRE

FACEVR: REAL-TIME GAZE-AWARE FACIAL REENACTMENT IN VIRTUAL REALITY: FaceVR virtually removes VR-goggles using facial reenactment, which is of paramount importance in a VR-teleconferencing scenario. To this end, we transfer expressions as well as eye motions to a stereo target video.

DEEP APPEARANCE MODELS FOR FACE RENDERING: We introduce Deep Appearance Models for capturing human facial appearance with a multi-view camera system, real-time realistic rendering in VR, and performance-driven animation from HMD-mounted cameras.

IMAGE & SHAPE ANALYSIS WITH CNNS (TECHNICAL PAPERS), 10:45AM – 12:35PM, WEST BUILDING, BALLROOM C, VANCOUVER CONVENTION CENTRE

NEURAL BEST-BUDDIES: SPARSE CROSS-DOMAIN CORRESPONDENCE: A deep learning based method for sparse correspondence between pairs of objects that belong to different semantic categories and may differ drastically in their appearance, but contain semantically related parts.

DEEP CONVOLUTIONAL PRIORS FOR INDOOR SCENE SYNTHESIS: We present a convolutional neural network based approach that learns object placement priors from a semantically-enriched top-down representation of indoor scenes.

POINT CONVOLUTIONAL NEURAL NETWORKS BY EXTENSION OPERATORS: This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds.

LEARNING LOCAL SHAPE DESCRIPTORS FROM PART CORRESPONDENCES WITH MULTI-VIEW CONVOLUTIONAL NETWORKS: We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems, such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching.

SEMANTIC SOFT SEGMENTATION: Semantic soft segmentation is the fully automatic decomposition of an image into a set of layers that correspond to semantically meaningful regions, with accurate soft transitions for image editing and compositing.

LAYERS, GLINTS AND SURFACE MICROSTRUCTURE (TECHNICAL PAPERS), 10:45AM – 12:35PM, WEST BUILDING, ROOM 211-214, VANCOUVER CONVENTION CENTRE

GAUSSIAN MATERIAL SYNTHESIS: This work presents a learning-based system for rapid mass-scale material synthesis and visualization.

FUTURE ARTIFICIAL INTELLIGENCE AND DEEP LEARNING TOOLS FOR VFX (PANEL), 2PM – 3:30PM, EAST BUILDING, BALLROOM BC, VANCOUVER CONVENTION CENTRE

This panel discusses trends and prospects for AI tools in the VFX pipeline. Panelists will talk about the AI tools currently used in the industry, answer audience questions, and share their vision of how the technology will develop.

Wednesday, August 15th

FLUIDS 2: VORTEX BOOGALOO (TECHNICAL PAPERS), 9AM – 10:30AM, WEST BUILDING, ROOM 211-214, VANCOUVER CONVENTION CENTRE

TEMPOGAN: A TEMPORALLY COHERENT, VOLUMETRIC GAN FOR SUPER-RESOLUTION FLUID FLOW: We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows. Our work represents a first approach to synthesizing four-dimensional physics fields with neural networks.
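
To make the GAN idea concrete: one common way to encourage temporal coherence is a discriminator that judges short frame sequences rather than single frames, so the generator is penalized for flickering. The PyTorch sketch below is our own simplified illustration; the paper's volumetric discriminator differs in detail.

```python
# A hedged sketch of a temporal discriminator: consecutive frames are
# stacked along the channel axis, so temporal inconsistencies show up as
# channel-wise disagreement.
import torch
import torch.nn as nn

class TemporalDiscriminator(nn.Module):
    def __init__(self, frames=3, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(frames * channels, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level real/fake scores
        )

    def forward(self, frame_stack):  # (batch, frames*channels, H, W)
        return self.net(frame_stack)
```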

FLUID DIRECTED RIGID BODY CONTROL USING DEEP REINFORCEMENT LEARNING: We present a learning-based method to control a coupled 2D system involving both fluid and rigid bodies. Our controller is a general neural-net, which is trained using deep reinforcement learning.

SKETCHING (TECHNICAL PAPERS), 9AM – 10:30AM, WEST BUILDING, ROOM 109-110, VANCOUVER CONVENTION CENTRE

FACESHOP: DEEP SKETCH-BASED FACE IMAGE EDITING: We present an interactive sketch-based image editing system for faces based on a convolutional neural network. Our proposed architecture and training procedure render high-quality, semantically consistent images.

DEEP THOUGHTS ON HOW THINGS MOVE (TECHNICAL PAPERS), 2PM – 3:30PM, WEST BUILDING, ROOM 211-214, VANCOUVER CONVENTION CENTRE

FAST AND DEEP DEFORMATION APPROXIMATIONS: Our method uses deep learning to approximate mesh deformations of film-quality characters, which allows the character rigs to run at interactive rates on consumer-quality devices.

LEARNING FOR RENDERING AND MATERIAL ACQUISITION (TECHNICAL PAPERS), 3:45PM – 5:35PM, WEST BUILDING, BALLROOM C, VANCOUVER CONVENTION CENTRE

DENOISING WITH KERNEL PREDICTION AND ASYMMETRIC LOSS FUNCTIONS: We describe a modular architecture for denoising rendered images based on kernel predicting networks, and introduce asymmetric loss functions that provide artistic control over the bias/variance trade-off in denoising.
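
To illustrate the kernel-prediction half of the title: instead of predicting output colors directly, the network emits a small weight kernel per pixel, and the denoised pixel is a weighted average of its noisy neighborhood. A hedged PyTorch sketch of just that reconstruction step (shapes and normalization are our assumptions, not the paper's exact architecture):

```python
# Apply per-pixel predicted kernels: each output pixel is the weighted sum
# of its k*k noisy neighborhood, with weights emitted by a network.
import torch
import torch.nn.functional as F

def apply_predicted_kernels(noisy, kernels, k=5):
    """noisy: (B, C, H, W); kernels: (B, k*k, H, W), softmax-normalized per pixel."""
    b, c, h, w = noisy.shape
    # Extract each pixel's k*k neighborhood: (B, C*k*k, H*W).
    patches = F.unfold(noisy, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k * k, h, w)
    weights = kernels.unsqueeze(1)        # (B, 1, k*k, H, W), shared across channels
    return (patches * weights).sum(dim=2)  # (B, C, H, W)
```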

DEEP IMAGE-BASED RELIGHTING FROM OPTIMAL SPARSE SAMPLES: A learning-based approach achieves image-based relighting under an environment map from only five captured images. A sampling network allows jointly learning both the optimal sampling directions and the relighting function.

EFFICIENT REFLECTANCE CAPTURE USING AN AUTOENCODER: We present a novel autoencoder-based framework that automatically learns the lighting patterns for efficient reflectance acquisition as well as how to reconstruct reflectance from measurements under such patterns.

SINGLE-IMAGE SVBRDF CAPTURE WITH A RENDERING-AWARE DEEP NETWORK: A deep learning method to capture spatially varying materials from a single picture. Our deep network uses procedural materials as training data, and image re-renderings as a measure of quality.

Thursday, August 16th

PIPELINES AND LANGUAGES FOR THE GPU (TECHNICAL PAPERS), 9AM – 10:30AM, WEST BUILDING, ROOM 109-110, VANCOUVER CONVENTION CENTRE

DIFFERENTIABLE PROGRAMMING FOR IMAGE PROCESSING AND DEEP LEARNING IN HALIDE: We enable high performance gradient computation with little programmer effort for image processing code, by extending the image processing language Halide with reverse mode automatic differentiation.
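
The machinery underneath is ordinary reverse-mode automatic differentiation, the same technique at the heart of deep learning frameworks. As a generic illustration only (this is toy Python via operator overloading, not Halide's actual API):

```python
# A toy reverse-mode AD sketch: each Var records how its value was computed,
# and backward() propagates adjoints from the output to every input.
class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x, y = Var(3.0), Var(4.0)
loss = x * y + x       # d(loss)/dx = y + 1 = 5, d(loss)/dy = x = 3
loss.backward()
print(x.grad, y.grad)  # 5.0 3.0
```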

ANIMATION CONTROL (TECHNICAL PAPERS), 10:45AM – 12:35PM, WEST BUILDING, ROOM 109-110, VANCOUVER CONVENTION CENTRE

LEARNING BASKETBALL DRIBBLING SKILLS USING TRAJECTORY OPTIMIZATION AND DEEP REINFORCEMENT LEARNING: We present a method based on trajectory optimization and deep reinforcement learning for learning robust controllers for various basketball dribbling skills, such as dribbling between the legs, running, and crossovers.

DEEPMIMIC: EXAMPLE-GUIDED DEEP REINFORCEMENT LEARNING OF PHYSICS-BASED CHARACTER SKILLS: We present a deep reinforcement learning framework that enables simulated characters to imitate a rich repertoire of highly dynamic and acrobatic skills from reference motion clips.
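
The key ingredient in this style of work is an imitation reward that scores how closely the simulated character tracks the reference clip; standard deep RL then maximizes it. A hedged sketch of such a reward (terms and weights are illustrative, not the paper's exact formulation):

```python
# An illustrative imitation reward: exponentiated tracking errors keep each
# term in (0, 1] and reward precise reproduction of the reference motion.
import numpy as np

def imitation_reward(sim, ref, w_pose=0.7, w_vel=0.1, w_com=0.2):
    """sim/ref: dicts with 'joint_pos', 'joint_vel', 'com' numpy arrays."""
    pose_err = np.sum((sim["joint_pos"] - ref["joint_pos"]) ** 2)
    vel_err = np.sum((sim["joint_vel"] - ref["joint_vel"]) ** 2)
    com_err = np.sum((sim["com"] - ref["com"]) ** 2)
    return (w_pose * np.exp(-2.0 * pose_err)
            + w_vel * np.exp(-0.1 * vel_err)
            + w_com * np.exp(-10.0 * com_err))
```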

LEARNING SYMMETRIC AND LOW-ENERGY LOCOMOTION: We introduce a new loss term and a physical-assistance curriculum into a deep reinforcement learning algorithm, and demonstrate learning symmetric, low-energy locomotion from scratch for a variety of characters.

MODE-ADAPTIVE NEURAL NETWORKS FOR QUADRUPED MOTION CONTROL: We propose a data-driven approach for animating quadruped motion. The novel architecture, called Mode-Adaptive Neural Networks, can learn a wide range of locomotion modes and non-cyclic actions.

SHAPE ANALYSIS (TECHNICAL PAPERS), 10:45AM – 12:35PM, WEST BUILDING, ROOM 211-214, VANCOUVER CONVENTION CENTRE

PREDICTIVE AND GENERATIVE NEURAL NETWORKS FOR OBJECT FUNCTIONALITY: We develop predictive and generative deep convolutional neural networks to predict the functionality of an object by hallucinating the interaction or usage scenarios involving the object.

RENDERFARMS AND MACHINE LEARNING (BOF), 11AM – 12PM, EAST BUILDING, ROOM 11, VANCOUVER CONVENTION CENTRE

A discussion about how machine learning techniques can be applied to maximize render-farm utilization and efficiency: data pipelines, ML frameworks, operationalizing models, influencing the farm scheduler, and assessing results.

OHOOO SHINY! (TALKS), 2PM – 3:30PM, EAST BUILDING, BALLROOM A, VANCOUVER CONVENTION CENTRE

AUTOMATIC PHOTO-FROM-PANORAMA FOR GOOGLE MAPS: We introduce a technique for extracting interesting photographs from 360-degree panoramas. Building on the success of CNNs for classification, we train a model that scores a given view and use this score to find the best view. Finally, we refine the selected view with an automated cropping technique.
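
The selection loop this describes is simple to state. A hedged sketch, where `score_model` and `sample_view` are hypothetical stand-ins for the trained CNN and a panorama-to-view projection:

```python
# Sample candidate views from the panorama, score each with the trained
# model, and keep the argmax before the final cropping pass.
def best_view(panorama, score_model, sample_view, num_candidates=64):
    candidates = [sample_view(panorama, i) for i in range(num_candidates)]
    scores = [score_model(view) for view in candidates]
    best = candidates[max(range(num_candidates), key=lambda i: scores[i])]
    return best  # an automated cropping pass would further refine this view
```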

PORTRAITS & SPEECH (TECHNICAL PAPERS), 2PM – 3:30PM, WEST BUILDING, BALLROOM C, VANCOUVER CONVENTION CENTRE

VISEMENET: AUDIO-DRIVEN ANIMATOR-CENTRIC SPEECH ANIMATION: We present VisemeNet, a novel deep-learning based approach that is able to produce animator-centric speech motion curves and automatically drive modern production face rigs directly from input audio alone.

HIGH-FIDELITY FACIAL REFLECTANCE AND GEOMETRY INFERENCE FROM AN UNCONSTRAINED IMAGE: We present a deep learning-based technique to infer high-quality facial reflectance and geometry given a single unconstrained image of the subject, which may contain partial occlusions and arbitrary illumination conditions.

DEEP VIDEO PORTRAITS: Our novel deep video portrait approach enables full control over a target actor by transferring head pose, facial expressions, and eye motion with a high level of photorealism.

MACHINE LEARNING AND RENDERING (COURSE), 2PM – 5:15PM, EAST BUILDING, BALLROOM BC, VANCOUVER CONVENTION CENTRE

Machine learning has recently enabled dramatic improvements in both real-time and offline rendering. We review the underlying principles and their relations to rendering. Besides fundamentals like the identity between reinforcement learning and the rendering equation, we cover efficient solutions to light transport simulation, participating media, noise removal, and future directions of research.
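
The "identity" mentioned here refers to the structural match between two standard fixed-point equations, the rendering equation and the Q-learning fixed point, written schematically side by side:

```latex
% Rendering equation: radiance = emission + BRDF-weighted integral of radiance.
% Q-learning fixed point: value = reward + discounted policy-weighted integral of value.
\begin{align}
  L(x,\omega) &= L_e(x,\omega)
    + \int_{\Omega} f_r(x,\omega',\omega)\, L_i(x,\omega')\,
      \cos\theta' \,\mathrm{d}\omega' \\
  Q(s,a) &= r(s,a)
    + \gamma \int_{A} \pi(a' \mid s')\, Q(s',a')\,\mathrm{d}a'
\end{align}
```

Emission plays the role of the immediate reward, and the BRDF-weighted integral of incoming radiance plays the role of the discounted expected future value.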

BODIES IN MOTION: HUMAN PERFORMANCE CAPTURE (TECHNICAL PAPERS), 3:45PM – 5:15PM, WEST BUILDING, BALLROOM C, VANCOUVER CONVENTION CENTRE

ROBUST SOLVING OF OPTICAL MOTION CAPTURE DATA BY DENOISING: This research presents a technique that removes the need for motion capture cleanup by using a neural network trained to be extremely robust to errors in the input.
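
A standard way to obtain that kind of robustness, and we read the talk as working in this spirit, is to corrupt clean training data so the network learns to fill gaps and ignore outliers at inference time. A hedged sketch of such a corruption step (probabilities and scales are made up):

```python
# Simulate marker occlusions and spurious shifts on clean training data so
# the downstream network becomes robust to the same errors at test time.
import numpy as np

def corrupt_markers(markers, occlude_prob=0.1, shift_prob=0.05, shift_scale=0.1):
    """markers: (num_markers, 3) clean float positions; returns a corrupted copy."""
    noisy = markers.copy()
    occluded = np.random.rand(len(markers)) < occlude_prob
    noisy[occluded] = 0.0  # simulate occluded markers the network must infer
    shifted = np.random.rand(len(markers)) < shift_prob
    noisy[shifted] += np.random.randn(shifted.sum(), 3) * shift_scale
    return noisy
```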

MONOPERFCAP: HUMAN PERFORMANCE CAPTURE FROM MONOCULAR VIDEO: We present the first monocular approach for temporally coherent 3D performance capture of a human in general clothing, reconstructing both articulated skeleton motion and non-rigid surface deformations.

ONLINE OPTICAL MARKER-BASED HAND TRACKING WITH DEEP LABELS: This paper proposes a real-time marker-based hand tracking system that enables dexterous hand interactions for complex tasks and subtle motions with robustness to occlusions, ghost markers, and hand sizes.
