tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow

Generative Adversarial Networks (GANs) have been on a tear these last few months, driving rapid advances, particularly in realistic image generation. This application to fluids is interesting not only because it extends the architecture into 3D, but also for what it enables. It has long been desirable to run a low-resolution proxy of a long simulation and then upscale it in a post-process that adds detail, avoiding the expense of re-simulating at full resolution. While some up-resolution techniques do exist, the fact that this paper achieves it in a way that is physically plausible makes it all the more exciting, and the work is sure to spur more research in this area. Long training times may be an issue today, but are less likely to be a concern in a few years.

We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows. Our work represents a first approach to synthesize four-dimensional physics fields with neural networks. Based on a conditional generative adversarial network that is designed for the inference of three-dimensional volumetric data, our model generates consistent and detailed results by using a novel temporal discriminator, in addition to the commonly used spatial one. Our experiments show that the generator is able to infer more realistic high-resolution details by using additional physical quantities, such as low-resolution velocities or vorticities. Besides improvements in the training process and in the generated outputs, these inputs offer means for artistic control as well. We additionally employ a physics-aware data augmentation step, which is crucial to avoid overfitting and to reduce memory requirements. In this way, our network learns to generate advected quantities with highly detailed, realistic, and temporally coherent features. Our method works instantaneously, using only a single time-step of low-resolution fluid data. We demonstrate the abilities of our method using a variety of complex inputs and applications in two and three dimensions.
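To make the two-discriminator idea concrete, here is a minimal PyTorch-style sketch of the loss structure, assuming hypothetical names: `G`, `D_s`, and `D_t` stand in for user-defined 3D convolutional modules, and the paper's conditional discriminator inputs and advection-based frame alignment are omitted for brevity. This is an illustration of the general setup, not the authors' implementation.

```python
# Illustrative sketch of a tempoGAN-style loss setup (assumed names, not the
# paper's code): a spatial discriminator D_s judges single high-res frames,
# while a temporal discriminator D_t judges triplets of consecutive frames.
import torch
import torch.nn.functional as F

def gan_losses(G, D_s, D_t, lr_frames, hr_frames):
    """lr_frames / hr_frames: lists of 3 consecutive tensors, each shaped
    (B, C, D, H, W) for volumetric (3D) fluid data."""
    fakes = [G(x) for x in lr_frames]  # super-resolved frames

    # Spatial discriminator: real vs. generated, one frame at a time.
    real_s = D_s(hr_frames[1])
    fake_s = D_s(fakes[1].detach())
    d_s = F.binary_cross_entropy_with_logits(real_s, torch.ones_like(real_s)) \
        + F.binary_cross_entropy_with_logits(fake_s, torch.zeros_like(fake_s))

    # Temporal discriminator: sees stacked consecutive frames, so flicker is
    # penalized even when each individual frame looks plausible.
    real_t = D_t(torch.cat(hr_frames, dim=1))
    fake_t = D_t(torch.cat([f.detach() for f in fakes], dim=1))
    d_t = F.binary_cross_entropy_with_logits(real_t, torch.ones_like(real_t)) \
        + F.binary_cross_entropy_with_logits(fake_t, torch.zeros_like(fake_t))

    # Generator: fool both discriminators, plus an L1 term toward ground truth.
    g_s = D_s(fakes[1])
    g_t = D_t(torch.cat(fakes, dim=1))
    g = F.binary_cross_entropy_with_logits(g_s, torch.ones_like(g_s)) \
      + F.binary_cross_entropy_with_logits(g_t, torch.ones_like(g_t)) \
      + F.l1_loss(fakes[1], hr_frames[1])

    return d_s, d_t, g
```

The key design choice this sketch highlights is that the temporal discriminator receives several frames stacked along the channel axis: a generator that produces sharp but temporally inconsistent output can fool the per-frame spatial discriminator, but not one that sees how frames evolve over time.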


