I'm interested in machine learning, automatic gesture recognition,
perception, character animation, performance animation, statistical pattern
recognition, and visual design. A deep question: How are we able to effortlessly anthropomorphize forms and motion? That is, how do we understand the motions of the desk lamps in the film short Luxo Jr. as those of a mother (father?) and child? There are plenty of other amazing examples. Here are some projects that happen to have web artifacts:
|
Watch And Learn, a computer vision system that
learns your gestures to control a musical score.
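(For flavor, here is a minimal sketch of one way a system might match an incoming gesture against learned examples: nearest-neighbor over fixed-length windows of tracked motion features. This is not the actual Watch And Learn algorithm; the feature layout, window length, and gesture names are all made up for illustration.)

    import numpy as np

    def nearest_gesture(window, templates):
        """Classify a fixed-length motion window by nearest stored example.

        window:    (T, D) array of T frames of D tracked features
        templates: dict mapping gesture name -> (T, D) example array
        Returns the name of the closest template (Euclidean distance).
        """
        best_name, best_dist = None, np.inf
        for name, example in templates.items():
            dist = np.linalg.norm(window - example)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name

    # Hypothetical setup: two stored 30-frame examples of 2-D hand
    # position, then a noisy repetition of the "raise" gesture.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 30)
    raise_hand = np.stack([np.zeros_like(t), t], axis=1)   # hand moves up
    sweep_hand = np.stack([t, np.zeros_like(t)], axis=1)   # hand moves right
    templates = {"raise": raise_hand, "sweep": sweep_hand}

    observed = raise_hand + 0.05 * rng.standard_normal(raise_hand.shape)
    print(nearest_gesture(observed, templates))             # -> "raise"

A real system has to cope with gestures performed at different speeds and lengths, which is much of the fun.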
|
Swamped!, an interactive experience in which instrumented plush toys
are used as a tangible, iconic interface for directing autonomous animated
characters. Shown at SIGGRAPH '98.
|
Marionette, a performance
animation system designed around an interactive training paradigm,
with a little inspiration from puppetry. (under development)
|
The
KidsRoom: an interactive, narrative play space put together by
a bunch of us.
|
Luxomatic, a performance
animation system driven by computer vision techniques.
|
At the 1996 SIGGRAPH Digital Bayou, I helped the crew put together our
Smart Spaces booth, including two performance animation demos.
|
The Seagull is a simple performance animation demo in which the user trains the mapping from body configuration to character configuration.
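(A minimal sketch of the training idea, assuming the mapping is a linear least-squares fit from example pose pairs supplied by the user. The real demo's mapping and pose representations may well differ; the dimensions here are invented.)

    import numpy as np

    def train_mapping(body_poses, character_poses):
        """Fit a linear map (with bias) from body pose to character pose
        via least squares on user-supplied training pairs."""
        X = np.hstack([body_poses, np.ones((len(body_poses), 1))])  # bias column
        W, *_ = np.linalg.lstsq(X, character_poses, rcond=None)
        return W

    def apply_mapping(W, body_pose):
        """Map one body pose through the trained linear map."""
        x = np.append(body_pose, 1.0)
        return x @ W

    # Hypothetical training set: 4-D body poses paired with 2-D wing angles.
    rng = np.random.default_rng(1)
    body = rng.standard_normal((20, 4))
    wings = body @ rng.standard_normal((4, 2)) + 0.3  # some ground-truth map
    W = train_mapping(body, wings)
    print(apply_mapping(W, body[0]), wings[0])        # close match

|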
Stephen Intille
and I spent some time hacking around with Bolo, a
networked multiplayer game for the Mac. The idea is to teach a robot
tank to help you in the game. The real question is "how do we
recognize actions?" Check it out.
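(As a toy illustration of the action-recognition question, here is a sketch that labels a short window of 2-D positions as approaching, retreating, or holding ground. The features, window size, and threshold are all hypothetical; real action recognition is far richer than this.)

    import numpy as np

    def label_action(player_xy, other_xy, eps=0.1):
        """Crude action labeling from a short window of positions:
        is the other tank approaching, retreating, or holding ground
        relative to the player?  player_xy, other_xy: (T, 2) arrays."""
        dists = np.linalg.norm(player_xy - other_xy, axis=1)
        trend = dists[-1] - dists[0]
        if trend < -eps:
            return "approaching"
        if trend > eps:
            return "retreating"
        return "holding"

    # Hypothetical window: the other tank closes half the distance
    # over ten frames while the player sits still.
    t = np.linspace(0.0, 1.0, 10)[:, None]
    player = np.zeros((10, 2))
    other = np.hstack([10.0 - 5.0 * t, np.zeros((10, 1))])
    print(label_action(player, other))   # -> "approaching"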
|
Before Toy
Story, there was Luxo Jr. In some ways Luxo Jr.
is the more impressive of the two: it makes us read the motions of two
faceless desk lamps as those of a parent and child. (Unfortunately, only
those within the Media Lab can access it.)
|