A recent post demonstrated a data-driven system for indoor scene synthesis. This work shares a similar motivation, but it is noteworthy not only for its results, which allow repeated inclusion of a given type of object and interpolation between entire scenes, but also for the rigorous analysis of the approach, which is bound to benefit future researchers and even studios wanting to experiment with similar technology. Perhaps we are not far from being able to lay out enormous environments in hours instead of days.
We present a deep generative scene modeling technique for indoor environments. Our goal is to train a generative model using a feed-forward neural network that maps a prior distribution (e.g., a normal distribution) to the distribution of primary objects in indoor scenes. We introduce a 3D object arrangement representation that models the locations and orientations of objects based on their size and shape attributes. Moreover, our scene representation applies to 3D objects with different multiplicities (repetition counts), selected from a database. We show a principled way to
train this model by combining discriminator losses for both a 3D object arrangement representation and a 2D image-based representation. We demonstrate the effectiveness of our scene representation and the deep learning method on benchmark datasets. We also show applications of this generative model to scene interpolation and scene completion.
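The combined training objective described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not the paper's implementation: one discriminator scores the 3D arrangement directly, another scores a 2D image-based view of it (here a coarse top-down occupancy grid), and the generator loss sums both adversarial terms. The rasterization scheme, scoring inputs, and weighting are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def top_down_image(arrangement, grid=16):
    """Rasterize object (x, y) locations (normalized to [-1, 1])
    into a coarse top-down occupancy grid, a simple stand-in for
    the 2D image-based representation."""
    img = np.zeros((grid, grid))
    for x, y in arrangement[:, :2]:
        i = int(np.clip((x + 1) / 2 * (grid - 1), 0, grid - 1))
        j = int(np.clip((y + 1) / 2 * (grid - 1), 0, grid - 1))
        img[i, j] = 1.0
    return img

def generator_loss(score_3d, score_2d, weight_2d=1.0):
    """Non-saturating GAN generator loss summed over the two
    discriminators: one on the 3D arrangement, one on the 2D image."""
    return -np.log(sigmoid(score_3d)) - weight_2d * np.log(sigmoid(score_2d))
```

The appeal of the dual-discriminator idea is that the 2D view makes spatial overlaps and coverage easy to penalize, while the 3D term keeps per-object attributes (orientation, size) realistic.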
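Scene interpolation in such a model amounts to decoding points on a line between two latent codes. The sketch below uses a tiny linear stand-in for the feed-forward generator; the latent size, the per-object attribute layout, and the generator itself are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8        # size of the prior (normal) distribution
NUM_OBJECTS = 3       # fixed number of object slots for this sketch
ATTRS_PER_OBJECT = 8  # e.g. (x, y, z) location, orientation, (w, d, h) size, presence

# Stand-in feed-forward generator: one linear layer mapping a prior
# sample to a flat arrangement vector.
W = rng.normal(size=(LATENT_DIM, NUM_OBJECTS * ATTRS_PER_OBJECT))

def generate(z):
    """Map a latent code to a (NUM_OBJECTS, ATTRS_PER_OBJECT) arrangement."""
    return (z @ W).reshape(NUM_OBJECTS, ATTRS_PER_OBJECT)

def interpolate_scenes(z0, z1, t):
    """Decode a point on the line between two latent codes."""
    return generate((1.0 - t) * z0 + t * z1)

z0 = rng.normal(size=LATENT_DIM)
z1 = rng.normal(size=LATENT_DIM)
scene_mid = interpolate_scenes(z0, z1, 0.5)
```

In a trained nonlinear generator, intermediate scenes would rearrange objects plausibly rather than blend attributes linearly; the linear stand-in only shows the mechanics.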
Have you seen some amazing research which should be covered here? Contact us or let us know in the comments section below!