
Interpretation

The Interpretation module reads features from the vision system described above and from an adaptive speech system developed by Deb Roy [20]. These feature streams are then converted into game actions: commands to the robots or configuration of the display. The Interpretation module can query game state through the database engine built into the Display module, and can query individual robots about their internal state.
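The text does not give the module's actual interfaces, but the data flow can be sketched roughly as follows. The class names, field names, and method signatures below are illustrative assumptions, not the system's real API.

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class SpeechEvent:
      word: str   # e.g. a verb, a literal noun, or a demonstrative pronoun
      kind: str   # "verb", "noun", or "pronoun", as classified by the speech system

  @dataclass
  class GestureFeatures:
      screen_x: float  # deictic gesture resolved to display screen coordinates
      screen_y: float

  class Interpreter:
      """Consumes speech and gesture feature streams and emits game actions."""

      def __init__(self, display_db, robots):
          self.display_db = display_db   # game-state queries via the Display module
          self.robots = robots           # per-robot state queries and commands

      def on_speech(self, event: SpeechEvent,
                    gesture: Optional[GestureFeatures] = None) -> None:
          raise NotImplementedError      # grammar-specific parsing goes here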

Currently this module is implemented as a finite state machine that encodes the subject-verb-object grammar used in Ogg That There. The adaptive speech system is pre-loaded with a set of verbs, literal nouns, and demonstrative pronouns. During the game, speech events advance the state of the interpreter. When a demonstrative pronoun is encountered, the interpreter resolves the gesture features into screen coordinates as described above. Those screen coordinates are then combined with grammatical constraints on valid referents, derived from the current state of the FSM, to generate a query against the Display database. Once a full sentence is parsed and all referents are instantiated, a command is issued to the appropriate robot.
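A minimal sketch of such a finite-state parse, building on the hypothetical classes above: the state names, the constraint vocabulary, and the query and command formats are all assumptions for illustration, not the paper's actual implementation.

  class SVOInterpreter(Interpreter):
      STATES = ("SUBJECT", "VERB", "OBJECT", "DONE")

      def __init__(self, display_db, robots):
          super().__init__(display_db, robots)
          self.reset()

      def reset(self):
          self.state = "SUBJECT"
          self.parsed = {}

      def on_speech(self, event, gesture=None):
          role = self.state.lower()          # grammatical role expected next
          if event.kind == "pronoun":
              # Demonstrative pronoun: resolve the accompanying gesture to
              # screen coordinates and query the Display database, constrained
              # to referents that are grammatically valid in the current state.
              referent = self.display_db.query(
                  near=(gesture.screen_x, gesture.screen_y),
                  valid_types=self.valid_referents(role),
              )
          else:
              referent = event.word          # verb or literal noun
          self.parsed[role] = referent
          self.advance()

      def advance(self):
          self.state = self.STATES[self.STATES.index(self.state) + 1]
          if self.state == "DONE":
              # Full sentence parsed and all referents instantiated:
              # issue the command to the robot named as subject.
              robot = self.robots[self.parsed["subject"]]
              robot.command(self.parsed["verb"], self.parsed["object"])
              self.reset()

      def valid_referents(self, role):
          # Illustrative grammatical constraint: subjects must be robots,
          # objects may be any piece tracked by the Display database.
          return ("robot",) if role == "subject" else ("robot", "piece")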



