Learning Generative Models of 3D Structures
Eurographics 2019 Tutorial
Many important applications demand 3D content, yet 3D modeling is a notoriously difficult and inaccessible activity. This tutorial provides a crash course in one of the most promising approaches for democratizing 3D modeling: learning generative models of 3D structures. Such a generative model typically describes a statistical distribution over a space of possible 3D shapes or 3D scenes, together with a procedure for sampling new shapes or scenes from that distribution. To be useful to non-experts for design purposes, a generative model must represent 3D content at a high level of abstraction in which the user can express their goals---that is, it must be structure-aware. In this tutorial, we will take a deep dive into the most exciting methods for building generative models of both individual shapes and composite scenes, highlighting how standard data-driven methods need to be adapted, or new methods developed, to create models that are both generative and structure-aware. The tutorial assumes knowledge of the fundamentals of computer graphics, linear algebra, and probability, though a quick refresher of important algorithmic ideas from geometric analysis and machine learning is included. Attendees should come away from this tutorial with a broad understanding of historical and current work in generative 3D modeling, as well as familiarity with the mathematical tools needed to start their own research or product development in this area.
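As a toy illustration of the two ingredients named above---a distribution over shapes and a procedure for sampling from it---the following Python sketch fits a Gaussian to a set of shape-parameter vectors and draws novel samples. The "dataset" and its part dimensions are entirely made up for illustration; the rich, structure-aware models covered in the tutorial go far beyond a single Gaussian.

```python
import numpy as np

# Hypothetical "dataset": each row is a coarse structural description of a
# chair-like shape (seat width, seat depth, leg height, back height).
# These numbers are invented purely for illustration.
rng = np.random.default_rng(0)
training_shapes = rng.normal(
    loc=[0.5, 0.5, 0.45, 0.8],      # assumed average part dimensions
    scale=[0.05, 0.05, 0.1, 0.15],  # assumed variation across the dataset
    size=(200, 4),
)

# "Learning" here is just fitting a Gaussian to the parameter vectors --
# a stand-in for the far more expressive models discussed in the tutorial.
mean = training_shapes.mean(axis=0)
cov = np.cov(training_shapes, rowvar=False)

# The generative model is the distribution plus a sampling procedure:
# drawing from it yields new (here, four-parameter) shape descriptions.
new_shapes = rng.multivariate_normal(mean, cov, size=5)
print(new_shapes.shape)  # five novel shape-parameter vectors
```

A structure-aware model would replace the flat parameter vector with an explicit representation of parts and their relationships, which is precisely what the later sessions address.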
| Talk | Speaker | Time |
| --- | --- | --- |
| Introduction (pdf) | Hao "Richard" Zhang | 13:30 - 14:15 |
| Geometric and Generative Modeling Basics (pdf, pptx) | Siddhartha Chaudhuri | 14:15 - 15:00 |
| Break | | 15:00 - 15:30 |
| Deep Hierarchical Models for 3D Shapes (pdf, pptx) | Kai "Kevin" Xu | 15:30 - 16:10 |
| Generative Models of 3D Scenes (pdf, pptx) | Daniel Ritchie | 16:10 - 17:00 |
Siddhartha Chaudhuri is a Senior Research Scientist in the Creative Intelligence Lab at Adobe Research, and an Assistant Professor of Computer Science and Engineering at IIT Bombay. He obtained his Ph.D. from Stanford University and his undergraduate degree from IIT Kanpur. He subsequently did postdoctoral research at Stanford and Princeton, and taught for a year at Cornell. Siddhartha's work combines geometric analysis, machine learning, and UI innovation to make sophisticated 3D geometric modeling accessible even to non-expert users. He also studies foundational problems in geometry processing (retrieval, segmentation, correspondences) that arise from this pursuit. His research themes include probabilistic assembly-based modeling, semantic attributes for design, and generative neural networks for 3D structures. He is the original author of the commercial 3D modeling tool Adobe Fuse, and has taught tutorials on data-driven 3D design and shape "semantics."
Kai (Kevin) Xu is an Associate Professor at the School of Computer Science, National University of Defense Technology, where he received his Ph.D. in 2011. He conducted visiting research at Simon Fraser University (2008-2010) and Princeton University (2017-2018). His research interests include geometry processing and geometric modeling, especially data-driven approaches to these problems, as well as 3D vision and its robotic applications. He has published over 60 research papers, including 21 SIGGRAPH/TOG papers. He organized a SIGGRAPH Asia course and a Eurographics STAR tutorial, both on data-driven shape analysis and processing. He currently serves on the editorial boards of Computer Graphics Forum, Computers & Graphics, and The Visual Computer. He also served as paper co-chair of CAD/Graphics 2017 and ICVRV 2017, and as a PC member for several prestigious conferences, including SIGGRAPH, SIGGRAPH Asia, SGP, PG, and GMP. Kai has made several major contributions to structure-aware 3D shape analysis and modeling with data-driven approaches, and recently with deep learning methods.
Daniel Ritchie is an Assistant Professor of Computer Science at Brown University. He received his PhD from Stanford University, advised by Pat Hanrahan and Noah Goodman. His research sits at the intersection of computer graphics and artificial intelligence, where he is particularly interested in data-driven methods for designing, synthesizing, and manipulating visual content. In the area of generative models for structured 3D content, he co-authored the first data-driven method for synthesizing 3D scenes, as well as the first method applying deep learning to scene synthesis. He has also worked extensively on applying techniques from probabilistic programming to procedural modeling problems, including learning procedural modeling programs from examples. In related work, he has developed systems for inferring generative graphics programs from unstructured visual inputs such as hand-drawn sketches.
Hao (Richard) Zhang is a Professor in the School of Computing Science at Simon Fraser University, Canada. He obtained his Ph.D. from the Dynamic Graphics Project (DGP), University of Toronto, and M.Math. and B.Math. degrees from the University of Waterloo, all in computer science. Richard's research is in computer graphics, with special interests in geometric modeling, analysis and synthesis of 3D content (e.g., shapes and indoor scenes), machine learning (e.g., generative models for 3D shapes), as well as computational design, fabrication, and creativity. He has published more than 120 papers on these topics. Most relevant to this tutorial, Richard was a co-author of the first Eurographics STAR on structure-aware shape processing and has taught SIGGRAPH courses on the topic. With his collaborators, he has made original and impactful contributions to structural analysis and synthesis of 3D shapes and environments, including co-analysis, hierarchical modeling, semi-supervised learning, topology-varying shape correspondence and modeling, and deep generative models.
Thanks to visualdialog.org for the webpage format.