
Initialization

Pfinder's initialization process consists primarily of building representations of the person and the surrounding scene. It first builds the scene model by observing the scene without people in it; then, when a person enters the scene, it begins to build a model of that person.
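As a rough illustration, the scene model can be thought of as a per-pixel color statistic accumulated while the scene is empty. The sketch below assumes a simple per-pixel mean and diagonal variance in whatever color space the frames use; the class and method names are placeholders for illustration, not Pfinder's actual interface.

    import numpy as np

    class SceneModel:
        """Per-pixel color statistics accumulated from empty-scene frames."""

        def __init__(self, height, width):
            self.n = 0
            self.mean = np.zeros((height, width, 3))   # per-pixel mean color
            self.var = np.zeros((height, width, 3))    # per-pixel color variance

        def update(self, frame):
            """Fold one empty-scene frame (H x W x 3 float array) into the model."""
            self.n += 1
            delta = frame - self.mean
            self.mean += delta / self.n
            # incremental (population) variance update
            self.var += (delta * (frame - self.mean) - self.var) / self.n

        def log_likelihood(self, frame):
            """Per-pixel Gaussian log-likelihood of a new frame, up to a constant."""
            var = np.maximum(self.var, 1e-4)           # floor to avoid division by zero
            d2 = (frame - self.mean) ** 2 / var
            return -0.5 * (d2 + np.log(var)).sum(axis=2)

A large, spatially coherent drop in this per-pixel likelihood is the kind of scene change that triggers person-model building in the next step.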

The person model is built by first detecting a large change in the scene and then building up a multi-blob model of the user over time. The model-building process is driven by the distribution of color on the person's body, with blobs added to account for each differently colored region. Typically, separate blobs are required for the person's hands, head, feet, shirt, and pants.
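One plausible way to seed such blobs, sketched below, is to cluster the changed pixels jointly by image position and color and create one blob per cluster. The blob count, the joint position-and-color feature, and the plain k-means loop are assumptions made for illustration; they are not Pfinder's actual model-building procedure.

    import numpy as np

    def spawn_blobs(frame, foreground_mask, n_blobs=5):
        """Cluster changed pixels jointly by position and color to seed one blob each."""
        ys, xs = np.nonzero(foreground_mask)
        colors = frame[ys, xs]                          # colors of the changed pixels
        # joint position-and-color features; in practice these should be scaled
        # so that neither position nor color dominates the distance
        feats = np.column_stack([xs, ys, colors]).astype(float)
        centers = feats[np.random.choice(len(feats), n_blobs, replace=False)]
        for _ in range(10):                             # a few plain k-means iterations
            d = ((feats[:, None, :] - centers[None]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for k in range(n_blobs):
                if (labels == k).any():
                    centers[k] = feats[labels == k].mean(axis=0)
        blobs = []
        for k in range(n_blobs):                        # one blob per non-trivial cluster
            pts = feats[labels == k]
            if len(pts) < 2:
                continue
            blobs.append({
                "mean_xy": pts[:, :2].mean(axis=0),
                "cov_xy": np.cov(pts[:, :2].T),
                "mean_color": pts[:, 2:].mean(axis=0),
            })
        return blobs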

The process of building a blob model is guided by a 2-D contour shape analysis that recognizes silhouettes in which the body parts can be reliably labeled. For instance, when the user faces the camera and extends both arms (what we refer to as the ``star fish'' configuration), we can reliably determine the image locations of the head, hands, and feet. When the user points at something, we can reliably determine the location of the head, one hand, and the feet.
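The sketch below gives a greatly simplified version of such a silhouette check: it labels the contour's extremal points as head, hands, and feet, and accepts the frame only when the horizontal extrema lie in the upper half of the silhouette, as they do in the ``star fish'' pose. The heuristics and thresholds are illustrative only; the actual shape analysis is more involved.

    import numpy as np

    def label_starfish(contour):
        """contour: (N, 2) array of (x, y) silhouette points, y increasing downward."""
        cx = contour[:, 0].mean()                  # horizontal center of the silhouette
        head = contour[contour[:, 1].argmin()]     # topmost point
        left = contour[contour[:, 0].argmin()]     # leftmost extremity
        right = contour[contour[:, 0].argmax()]    # rightmost extremity
        bottom = contour[contour[:, 1] > np.percentile(contour[:, 1], 90)]
        feet_l = bottom[bottom[:, 0] < cx]
        feet_r = bottom[bottom[:, 0] >= cx]
        if len(feet_l) == 0 or len(feet_r) == 0:
            return None
        # hands extended to the sides appear as the horizontal extrema and lie
        # in the upper half of the silhouette; otherwise reject this frame
        y_mid = 0.5 * (contour[:, 1].min() + contour[:, 1].max())
        if left[1] < y_mid and right[1] < y_mid:
            return {"head": head, "hand_left": left, "hand_right": right,
                    "foot_left": feet_l.mean(axis=0), "foot_right": feet_r.mean(axis=0)}
        return None                                # silhouette not recognized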

These locations are then integrated into the blob-model building process by using them as prior probabilities for blob creation and tracking. For instance, when the face and hand image positions have been identified, we can set up a strong prior probability for skin-colored blobs at those locations.
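A minimal sketch of this step is given below, assuming each prior is a Gaussian in image position and color centered on a labeled location; the skin-color value and the covariance magnitudes are placeholders, not values taken from the system.

    import numpy as np

    SKIN_COLOR = np.array([0.6, 0.45, 0.4])        # assumed normalized skin color

    def blob_priors_from_labels(labels, pos_sigma=15.0, color_sigma=0.1):
        """Turn labeled head/hand positions into Gaussian priors for new skin blobs."""
        priors = []
        for part in ("head", "hand_left", "hand_right"):
            if labels and labels.get(part) is not None:
                priors.append({
                    "part": part,
                    "mean_xy": np.asarray(labels[part], dtype=float),
                    "cov_xy": (pos_sigma ** 2) * np.eye(2),     # tight spatial prior
                    "mean_color": SKIN_COLOR,
                    "cov_color": (color_sigma ** 2) * np.eye(3),
                })
        return priors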

The following subsections describe the blob-model building process in greater detail.




