We present a new environment for the development of situated vision and behavior algorithms. Our environment allows an unencumbered person to interact with autonomous agents in a simulated graphical world through the use of situated vision techniques. An image of the participant is composited with the graphical world and projected onto a large screen in front of the participant. No goggles, gloves, or wires are needed; agents and objects in the graphical world can be acted upon by the human participant through domain-specific computer vision techniques that analyze the image of the person. The agents inhabiting the world are modeled as autonomous behaving entities that have their own sensors and goals and that can interpret the actions of the participant and react to them in real time. We have demonstrated and tested our system with two prototypical worlds and describe the results obtained with over 500 people.
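As a rough illustration of the compositing step described above, the sketch below segments the participant from a static background and overlays the resulting pixels onto a rendered world frame. This is a minimal, hypothetical sketch only: the background-differencing approach, the threshold value, and the names (`extract_person_mask`, `composite`, the commented-out `camera`/`renderer` sources) are assumptions for illustration, not the paper's domain-specific vision routines.

```python
import numpy as np

def extract_person_mask(frame, background, threshold=30):
    """Segment the participant by differencing the camera frame
    against a static background model (one common figure/ground
    technique; the paper's actual vision methods are domain-specific)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=2) > threshold  # per-pixel boolean mask

def composite(frame, world, mask):
    """Overlay the participant's pixels onto the rendered graphical
    world, producing the combined image projected on the screen."""
    out = world.copy()
    out[mask] = frame[mask]
    return out

# Hypothetical usage, assuming H x W x 3 uint8 images:
#   frame = camera.read()
#   world = renderer.draw(agents)
#   screen.show(composite(frame, world,
#                         extract_person_mask(frame, background)))
```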