Stable Spaces for Real-time Clothing


People

Edilson de Aguiar, Leonid Sigal, Adrien Treuille, Jessica K. Hodgins

Abstract

We present a technique for learning clothing models that enables the simultaneous animation of thousands of detailed garments in real-time. This surprisingly simple conditional model learns and preserves the key dynamic properties of the cloth motion along with its folding details. Our approach requires no a priori physical model, but rather treats training data as a 'black box'. We show that the models learned with our method are stable over large time-steps and can approximately resolve cloth-body collisions. We also show that within a class of methods, no simpler model covers the full range of cloth dynamics captured by ours. Our method bridges the current gap between skinning and physical simulation, combining benefits of speed from the former with dynamic effects from the latter. We demonstrate our approach on a variety of apparel worn by male and female human characters performing a varied set of motions typically used in video games (e.g., walking, running, jumping, etc.).
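For readers curious what a learned conditional clothing model of this flavor might look like in code, the following is a minimal, hypothetical Python/NumPy sketch. It fits a second-order linear model of a PCA-reduced cloth state conditioned on a reduced body pose using ordinary least squares. The function names, reduced dimensions, and the plain least-squares fit are illustrative assumptions only; the paper's actual formulation, including its treatment of stability and collisions, is described in the PDF.

    # Hypothetical sketch (not the paper's exact formulation): learn a
    # second-order linear model of PCA-reduced cloth, conditioned on body pose.
    import numpy as np

    def pca_basis(X, k):
        """Mean and top-k principal directions of the rows of X."""
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k].T                          # shapes (d,), (d, k)

    def fit_conditional_model(cloth, body, k_cloth=50, k_body=20):
        """cloth: (T, d_c) stacked vertex positions; body: (T, d_b) pose features."""
        mu_c, U_c = pca_basis(cloth, k_cloth)
        mu_b, U_b = pca_basis(body, k_body)
        z = (cloth - mu_c) @ U_c                     # reduced cloth states (T, k_cloth)
        w = (body - mu_b) @ U_b                      # reduced body poses   (T, k_body)
        # Regress z_t on [z_{t-1}, z_{t-2}, w_t, 1] with ordinary least squares.
        T = len(z)
        F = np.hstack([z[1:T-1], z[0:T-2], w[2:T], np.ones((T - 2, 1))])
        M, *_ = np.linalg.lstsq(F, z[2:T], rcond=None)
        return dict(M=M, mu_c=mu_c, U_c=U_c, mu_b=mu_b, U_b=U_b)

    def step(model, z_prev, z_prev2, body_pose):
        """Advance the reduced cloth state one frame; return (new state, full vertices)."""
        w = (body_pose - model["mu_b"]) @ model["U_b"]
        f = np.concatenate([z_prev, z_prev2, w, [1.0]])
        z = f @ model["M"]
        return z, model["mu_c"] + z @ model["U_c"].T

At run time, only the small matrix-vector products in step() are needed per garment per frame, which is what makes driving many garments simultaneously cheap; the unconstrained least-squares fit above, however, carries no stability guarantee over long rollouts, which is one of the issues the paper addresses.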



Edilson de Aguiar, Leonid Sigal, Adrien Treuille, and Jessica K. Hodgins,
"Stable Spaces for Real-time Clothing,"
ACM Transactions on Graphics (SIGGRAPH 2010), July 2010.
[PDF (1.2MB)] [BibTeX]
[New Video (MP4, 38MB)] [Submission Video (MP4, 63MB)]

Erratum: In the stability section, the vector length immediately below Equation 9 should not be squared.

Funding

This research is supported in part by NSF CCF-0702556 and the Intel Corporation.