Modeling of Close Interactions between Characters for Multiple Character Motion Synthesis and Interactive Application

Abstract

Motion capture is now widely used in entertainment, including animation, movies, and games. Because capturing motion is difficult and expensive, the captured data should be utilized wisely. In this talk, we will report our work on developing novel methods to analyze captured data for different applications and to synthesize new multiple-character motion data from existing motion data.

On the analysis side, we will present methods to analyze captured data for motion training, virtual partner control, and motion retrieval. For motion training, we track the user's motion in real time and compare it with pre-captured data of professionals. For virtual partner control, we apply kd-tree indexing and k-nearest-neighbor search to identify the user's pose from the real-time capture. As the user's pose may differ from the poses in the database, the matched pose pair in the database is adapted to the user's pose in real time while the spatial relationships within the pose pair are preserved. For retrieval, we propose a method that searches existing two-character motions whose interaction context is similar to that of the query motion, based on spatial features measured by Laplacian coordinates and topology structure.
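The kd-tree and k-nearest-neighbor step can be illustrated with a minimal sketch. This is not the speaker's implementation: the database size, the 20-joint skeleton flattened to a 60-dimensional feature vector, and the `match_pose` helper are all assumptions made for illustration.

```python
# Illustrative sketch of kd-tree pose lookup (hypothetical data layout):
# each pose is a 20-joint skeleton flattened to 60 floats (x, y, z per joint).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Hypothetical database of 1000 pre-captured poses.
database_poses = rng.normal(size=(1000, 60))
tree = cKDTree(database_poses)

def match_pose(user_pose, k=5):
    """Return indices and distances of the k database poses
    closest to the user's real-time captured pose."""
    distances, indices = tree.query(user_pose, k=k)
    return indices, distances

query = rng.normal(size=60)
idx, dist = match_pose(query)
```

In a real-time setting, the matched database pose pair would then be adapted toward the user's pose while keeping the relative spatial configuration between the two characters.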

On the synthesis side, we will present a novel method for synthesizing new two-character motion by merging two existing two-character motion clips without manually specified constraints. The output two-character motions are synthesized by spacetime optimization, which preserves both the spatial relationship between the characters and the local detail of each individual character. We further extend the method so that users can create a scene with more than two characters interacting with each other: the user can design a scene with any number of characters and control the interactions between them. The output animations show that our method can create multi-character interaction scenes in which the characters' positions are allocated properly and the characters interact with each other like those in the inputs.
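The trade-off at the heart of such a spacetime optimization can be sketched as a toy least-squares objective. This is my illustration, not the speaker's formulation: the 1-D trajectories, the `target_gap` spacing term, and the weight of 10.0 are all invented for the example.

```python
# Toy spacetime-style objective: keep each character close to its input
# trajectory (local detail) while preserving the inter-character spacing
# taken from the interaction (spatial relationship).
import numpy as np
from scipy.optimize import minimize

T = 10                                # number of frames
orig_a = np.linspace(0.0, 1.0, T)     # character A's input trajectory (1-D toy)
orig_b = np.linspace(2.0, 3.0, T)     # character B's input trajectory
target_gap = 1.5                      # desired per-frame spacing (assumed)

def objective(x):
    a, b = x[:T], x[T:]
    local = np.sum((a - orig_a) ** 2) + np.sum((b - orig_b) ** 2)
    spatial = np.sum((b - a - target_gap) ** 2)
    return local + 10.0 * spatial     # weight the interaction term heavily

x0 = np.concatenate([orig_a, orig_b])
result = minimize(objective, x0)
a_new, b_new = result.x[:T], result.x[T:]
```

Because the spatial term is weighted heavily, the solver pulls the two trajectories toward the target spacing while each stays near its input; a full system would replace the scalar gap with constraints on joint-to-joint spatial relationships across all frames.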

Speaker

Dr. Howard Leung
Department of Computer Science, City University of Hong Kong

Date & Time

30 Oct 2014 (Thursday) 17:00 - 18:00

Venue

E11-4045 (University of Macau)

Organized by

Department of Computer and Information Science

Biography

Howard Leung is currently an Assistant Professor in the Department of Computer Science at City University of Hong Kong. He received the B.Eng. degree in Electrical Engineering from McGill University, Canada, in 1998, and the M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from Carnegie Mellon University in 1999 and 2003, respectively. His current research projects include Interactive Applications with Motion Capture, Synthesis of Single Character Motions with Variations, Synthesis of Multiple Character Interacting Motions, Brain Informatics, Intelligent Tools for Chinese Handwriting Education, and Web-Based Learning Technologies. Recently he received the Best Paper Award for his paper titled “Spatial temporal pyramid matching using temporal sparse representation for human motion retrieval” at the CGI 2014 Conference in Sydney, Australia. He supervises the 3D Motion Capture Laboratory (http://mocap.cs.cityu.edu.hk), which promotes 3D motion capture technology for research and teaching at City University of Hong Kong. At present, he is the Treasurer of the Hong Kong Web Society.