TY - JOUR
T1 - GRAINS
T2 - Generative recursive autoencoders for indoor scenes
AU - Li, Manyi
AU - Patil, Akshay Gadi
AU - Xu, Kai
AU - Chaudhuri, Siddhartha
AU - Khan, Owais
AU - Shamir, Ariel
AU - Tu, Changhe
AU - Chen, Baoquan
AU - Cohen-Or, Daniel
AU - Zhang, Hao
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/2
Y1 - 2019/2
N2 - We present a generative neural network that enables us to generate plausible 3D indoor scenes in large quantities and varieties, easily and highly efficiently. Our key observation is that indoor scene structures are inherently hierarchical. Hence, our network is not convolutional; it is a recursive neural network, or RvNN. Using a dataset of annotated scene hierarchies, we train a variational recursive autoencoder, or RvNN-VAE, which performs scene object grouping during its encoding phase and scene generation during decoding. Specifically, a set of encoders are recursively applied to group 3D objects based on support, surround, and co-occurrence relations in a scene, encoding information about objects' spatial properties, semantics, and relative positioning with respect to other objects in the hierarchy. By training a variational autoencoder (VAE), the resulting fixed-length codes roughly follow a Gaussian distribution. A novel 3D scene can be generated hierarchically by the decoder from a randomly sampled code from the learned distribution. We coin our method GRAINS, for Generative Recursive Autoencoders for INdoor Scenes. We demonstrate the capability of GRAINS to generate plausible and diverse 3D indoor scenes and compare with existing methods for 3D scene synthesis. We show applications of GRAINS including 3D scene modeling from 2D layouts, scene editing, and semantic scene segmentation via PointNet whose performance is boosted by the large quantity and variety of 3D scenes generated by our method.
KW - 3D indoor scene generation
KW - Recursive neural network
KW - Variational autoencoder
UR - http://www.scopus.com/inward/record.url?scp=85062349751&partnerID=8YFLogxK
U2 - 10.1145/3303766
DO - 10.1145/3303766
M3 - Article
AN - SCOPUS:85062349751
VL - 38
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
SN - 0730-0301
IS - 2
M1 - a12
ER -