-- MODELS --

User Representation

Once Pfinder has located and analysed the user, there are several kinds of information that it can export to clients for a variety of uses.

By far the simplest output is the segmentation bitmap. This is often exported as a polygon, and is used to composite the user with real-time graphics.
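Used as a matte, the segmentation bitmap makes compositing straightforward: wherever the mask is set, take the user's pixels; elsewhere, keep the graphics. The sketch below (Python with NumPy; the function name and demo data are illustrative, not Pfinder's actual interface) shows the idea.

```python
import numpy as np

def composite(frame, mask, background):
    """Overlay the segmented user onto a new background.

    frame, background: HxWx3 uint8 images; mask: HxW boolean
    segmentation bitmap (True where the user was detected).
    """
    out = background.copy()
    out[mask] = frame[mask]  # copy user pixels over the graphics layer
    return out

# Tiny demo with synthetic data.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)   # "camera" image
background = np.zeros((4, 4, 3), dtype=np.uint8)  # graphics layer
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                             # user occupies the centre
print(composite(frame, mask, background)[1, 1])   # → [200 200 200]
```

A polygonal export of the same region serves the same purpose when the client renders with geometry rather than per-pixel masks.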

A more detailed model describes the composition of the user. The same Gaussian blobs that are used to classify the image also serve as a compact model of the user's body parts.
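A blob of this kind is just a spatial mean and covariance, and pixels can be assigned to blobs by Mahalanobis distance. The sketch below (a minimal illustration; the class, threshold, and example blobs are assumptions, not Pfinder's code) shows how the same statistics support both classification and modelling.

```python
import numpy as np

class Blob:
    """A 2-D Gaussian blob: spatial mean and covariance."""
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)
        self.cov_inv = np.linalg.inv(np.asarray(cov, dtype=float))

    def mahalanobis2(self, xy):
        """Squared Mahalanobis distance of a pixel to the blob centre."""
        d = np.asarray(xy, dtype=float) - self.mean
        return float(d @ self.cov_inv @ d)

def classify(xy, blobs, threshold=9.0):
    """Assign a pixel to the nearest blob, or None (background)."""
    dists = [b.mahalanobis2(xy) for b in blobs]
    i = int(np.argmin(dists))
    return i if dists[i] < threshold else None

# Two illustrative blobs: a small round "head" above a tall "torso".
head = Blob(mean=[50, 10], cov=[[25, 0], [0, 25]])
torso = Blob(mean=[50, 40], cov=[[100, 0], [0, 400]])
print(classify([50, 12], [head, torso]))  # → 0 (the head blob)
```

Because each blob is a full distribution rather than a point, the same representation yields position, extent, and orientation of each body part for free.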

By applying heuristic knowledge to the output of the segmentation system, Pfinder can locate several semantically interesting points on the body: the head, the hands, and the feet. Often this level of representation is the right one.
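One family of heuristics works directly on the silhouette extremes: for an upright user, the topmost pixel is a good head candidate, the bottommost a feet candidate, and the horizontal extremes candidate hands. The sketch below is a simplified guess at this kind of rule, not Pfinder's actual procedure.

```python
import numpy as np

def locate_features(mask):
    """Heuristically label silhouette extremes as body features.

    mask: HxW boolean segmentation bitmap. Returns (row, col) points.
    Assumes an upright user facing the camera.
    """
    rows, cols = np.nonzero(mask)
    return {
        "head": (int(rows.min()), int(cols[rows.argmin()])),
        "feet": (int(rows.max()), int(cols[rows.argmax()])),
        "left_hand": (int(rows[cols.argmin()]), int(cols.min())),
        "right_hand": (int(rows[cols.argmax()]), int(cols.max())),
    }

# A tiny T-pose silhouette: head on top, arms outstretched, legs below.
mask = np.zeros((5, 5), dtype=bool)
mask[0, 2] = True        # head
mask[1, :] = True        # arms
mask[2:5, 2] = True      # torso and legs
print(locate_features(mask)["head"])  # → (0, 2)
```

Real silhouettes are noisier, which is exactly why Pfinder layers heuristic knowledge on top of the statistical segmentation rather than trusting raw extremes.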

One use for these features is the analysis of gesture. Pfinder implements a very simple gesture recognition system that provides labels such as pointing, standing, and sitting. More sophisticated gesture systems are possible, and are the subject of current research.
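Labels of this kind can follow from simple geometric tests on the feature points. The rules below are invented for illustration (the text does not give Pfinder's actual criteria): a hand far to the side of the head suggests pointing, and a short silhouette suggests sitting.

```python
def label_pose(features, frame_height):
    """Toy gesture labels from body feature points.

    features: dict mapping "head", "feet", "left_hand", "right_hand"
    to (row, col) image points; rows grow downward.
    The thresholds here are illustrative assumptions.
    """
    head_r, head_c = features["head"]
    feet_r, _ = features["feet"]
    body_span = feet_r - head_r
    for hand in ("left_hand", "right_hand"):
        _, hand_c = features[hand]
        if abs(hand_c - head_c) > body_span // 2:
            return "pointing"          # a hand extended far to the side
    if body_span < frame_height // 2:
        return "sitting"               # silhouette is short
    return "standing"

features = {"head": (10, 50), "feet": (90, 50),
            "left_hand": (40, 48), "right_hand": (40, 52)}
print(label_pose(features, frame_height=100))  # → standing
```

Even rules this crude are useful to clients that only need coarse activity labels; richer classifiers would operate on the blob statistics over time.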


It is possible to place the user in 3-D by measuring the camera parameters (position, orientation, and focal length) and assuming the user is always touching the ground.
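Under the ground-contact assumption, the 3-D position follows by intersecting the camera ray through the feet pixel with the ground plane. A minimal sketch of that geometry, assuming a pinhole camera with known pose (all names here are illustrative):

```python
import numpy as np

def ground_point(u, v, f, cam_pos, R):
    """Intersect the camera ray through pixel (u, v) with the ground
    plane z = 0.

    f       : focal length in pixels (principal point at the image origin)
    cam_pos : camera centre in world coordinates (x, y, z), z > 0
    R       : 3x3 world-from-camera rotation matrix
    """
    ray_cam = np.array([u, v, f], dtype=float)   # pinhole ray, camera frame
    ray_world = R @ ray_cam                      # rotate into world frame
    t = -cam_pos[2] / ray_world[2]               # solve z = 0 along the ray
    return cam_pos + t * ray_world

# Camera 2 m above the ground, looking straight down.
R = np.array([[1, 0, 0],
              [0, -1, 0],
              [0, 0, -1]], dtype=float)          # camera z-axis points down
cam = np.array([0.0, 0.0, 2.0])
print(ground_point(0, 0, 500, cam, R))           # → [0. 0. 0.]
```

With the feet pinned to the plane this way, a single calibrated camera suffices to recover the user's world position without stereo.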