Over the past few years there has been a significant surge of interest in unmanned aerial vehicles and their use in civilian and military operations. This talk will concentrate on a research effort to improve the flying qualities of a small autonomous helicopter through a program combining theoretical developments and flight tests aimed at learning control strategies from expert human pilots. Further research efforts on automated path planning and collision avoidance for these machines will also be outlined.
Eric Feron is an Associate Professor of Aeronautics and Astronautics at the Laboratory for Information and Decision Systems and the Operations Research Center. For more information, see his web page at: http://web.mit.edu/~feron/Public/www
Sparse Greedy Methods for Learning
Large datasets impose serious storage and computation requirements on any kernel method, such as Regularization Networks or Support Vector Machines, because the number of basis functions (i.e., kernels) required for an optimal solution equals the number of samples. We present a sparse greedy technique that approximates the full set of basis functions by a subset that can be dealt with much more easily, with no observable change in accuracy. Experimental results show that speedups of a factor of 10 or more over existing algorithms are easily obtained. In particular, we show how both matrices and single elements in Hilbert space can be approximated efficiently.
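To give a concrete sense of the flavor of such a method, the sketch below greedily picks the kernel column with the largest residual energy and deflates the rest. This is a generic sparse greedy matrix approximation under our own assumptions (an RBF kernel, a squared-error criterion, and invented function names); it is not the speakers' exact algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_greedy_basis(X, n_basis, gamma=1.0):
    """Greedily select a subset of samples whose kernel columns best
    approximate the full kernel matrix (illustrative sketch only)."""
    K = rbf_kernel(X, X, gamma)          # full kernel: small demos only
    residual = K.copy()
    chosen = []
    for _ in range(n_basis):
        # pick the unchosen column with the largest residual energy
        scores = (residual ** 2).sum(axis=0)
        scores[chosen] = -np.inf
        j = int(np.argmax(scores))
        chosen.append(j)
        # deflate: project the residual away from the chosen column
        col = residual[:, j:j + 1]
        norm2 = float(col.T @ col)
        if norm2 > 1e-12:
            residual -= col @ (col.T @ residual) / norm2
    return chosen, K, residual
```

The residual norm shrinks as more basis columns are chosen, so accuracy can be traded against the size of the retained subset.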
This is joint work with Peter Bartlett and Bernhard Scholkopf.
Alexander J. Smola is currently in the Department of Engineering and RSISE at the Australian National University. For more information, please visit his web page: http://spigot.anu.edu.au/~smola/
"Using Principled Statistical Methods to Unravel the Genetic Regulatory Networks inside Cells"
The genetic information of a cell is contained in the sequence of base pairs in its DNA. The cell transcribes the information in these genes into the form of RNA, which is subsequently translated into protein. Proteins perform a host of important roles in the cell; from a computer scientist's perspective, proteins are responsible for input/output, power supply, message passing, processing, and even the cell's structure.
In addition to these roles, proteins play one other critical role: control. Proteins are necessary for regulating which genes are transcribed and translated, thereby creating a feedback mechanism in which they control their own existence. These genetic regulatory networks are poorly understood. While many mechanisms of regulation have been uncovered and some small networks are known, most of the interconnections between combinations of proteins and combinations of genes remain to be discovered. Our group's raison d'etre is to make progress unraveling these networks through the application of statistical models.
In this talk, I will present some of the constraints inherent in the problem, as well as some of the issues that we have encountered in tackling it. Our methods are diverse, but we focus primarily on graphical models since they are highly interpretable models in this domain, yet remain sufficiently powerful to capture complex biological relationships. I expect that this talk will be approachable to the layman, and there will be plenty of room for discussion and questions. Suggestions from the audience will be welcomed.
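As a minimal illustration of why graphical models are so interpretable in this domain, the toy sketch below factorizes a two-gene regulator-target chain as a tiny Bayesian network and inverts it with Bayes' rule. The genes and every probability here are invented for illustration and have nothing to do with the group's actual models or data.

```python
# Toy two-node Bayesian network for a regulator gene and a target gene:
# P(regulator, target) = P(regulator) * P(target | regulator).
# All numbers below are made-up illustrative values.
p_reg = {'on': 0.3, 'off': 0.7}          # is the regulator expressed?
p_target = {                             # target expression given regulator
    'on':  {'on': 0.9, 'off': 0.1},
    'off': {'on': 0.2, 'off': 0.8},
}

def joint(reg, tgt):
    """Joint probability from the network's factorization."""
    return p_reg[reg] * p_target[reg][tgt]

def p_reg_given_target_on(reg):
    """Bayes' rule: infer the regulator's state from an observed target."""
    evidence = sum(joint(r, 'on') for r in p_reg)
    return joint(reg, 'on') / evidence
```

The edge structure itself is the biological hypothesis (which protein regulates which gene), which is what makes such models easy to read off and discuss.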
Our group is a collaboration between Professors David Gifford, Richard Young, and Tommi Jaakkola, and includes also Tarjei Mikkelsen and Chen-Hsiang Yeang.
Alex Hartemink is a Ph.D. candidate in computer science, optimistic that he is nearly finished. His calico background includes degrees in mathematics, physics, economics, and computer science. Past research efforts include building computers out of DNA, cryptography, studying human choice and preference, game theory, stabilizing chaotic systems, modeling the Federal Reserve's interactions with Congress, and various mathematical conundrums. His science fair projects in middle school dealt with increasing the longevity of cut flowers and extra-sensory perception.
by Brian Scassellati
If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a ``theory of mind.'' This paper presents the theories of Leslie (1994) and Baron-Cohen (1995) on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.
Mathematical Models of the Perception of Facial Expressions of Emotion
by Alain Mignault
In this talk, I present the results of a categorization task in which subjects were asked to categorize facial expressions into different emotional classes, and of a similarity task in which subjects had to evaluate the similarity between pairs of facial expressions. I measured the entropy of the responses and found interesting properties. The use of principal component analysis to study face images is briefly reviewed in order to introduce two connectionist models: one for the categorization task and one for the similarity task. The models predict mean response times, response probabilities, and entropy. An unexpected finding involving ambiguous stimuli is also discussed.
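For readers unfamiliar with this preprocessing step, the sketch below shows the standard PCA ("eigenfaces") projection of flattened face images onto their leading principal components. The function name and the use of an SVD are our own illustrative choices, not details from the talk.

```python
import numpy as np

def pca_features(images, n_components):
    """Project flattened face images onto their leading principal
    components ("eigenfaces"): a sketch of the preprocessing step."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of centered data: the rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components, mean
```

The resulting low-dimensional coordinates are what a connectionist model would typically take as input in place of raw pixels.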
Relating Human Actions and Intentions: A Look at Eye Movements
by Dario Salvucci
People continually infer others' intentions from their actions, sometimes in mundane ways (e.g., watching a friend wave "hello"), sometimes in complex and subtle ways (e.g., watching a poker player for signs of bluffing). In this talk I will discuss a long-term project aimed at understanding and formalizing the link between actions and intentions. Our approach emphasizes the integrated development of two types of models: generative models that predict behavior given a model of thoughts and intentions, and interpretive models that map behavior back to the thoughts and intentions that produced them. As an example of this approach, I will describe some of our recent work on human eye movements, including interpretive models specified as hidden Markov models and a generative model grounded in the ACT-R cognitive architecture.
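To give a concrete sense of the interpretive direction, here is a generic log-space Viterbi decoder of the kind used to map an observation sequence (e.g., discretized fixation locations) back to its most likely hidden states (e.g., intended targets). This is textbook HMM machinery under our own assumptions, not the speaker's specific model.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for a discrete HMM.
    pi: initial state probabilities; A: state transitions;
    B[s, o]: probability of observation o in state s."""
    n_states = len(pi)
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        trans = logd[:, None] + np.log(A)   # score of each i -> j move
        back[t] = trans.argmax(axis=0)
        logd = trans.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):           # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With sticky self-transitions and reliable emissions, the decoded state sequence tracks the observations while smoothing over brief noise, which is roughly the behavior one wants when inferring intentions from noisy fixations.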
I recently completed a Ph.D. in Computer Science at Carnegie Mellon University, where my research included work in user interfaces, analogy, equation solving, and reading under the direction of John Anderson. I joined Cambridge Basic Research in September 1999.
Evolution of Avian Intelligence,
with an Emphasis on Grey Parrots
Irene M. Pepperberg
University of Arizona
Determining what constitutes avian intelligence, much less what selection pressures might have shaped the cognitive architecture that underlies intelligent behavior, is a daunting task. Even after two decades of examining the cognitive and communicative abilities of Grey parrots (Psittacus erithacus) and of following colleagues' studies of the capacities of other avian species, I have more questions than answers. But of these many questions, five interrelated ones, three general and two specific to birds, appear particularly relevant to discussions of nonhuman intelligence: First, what actually is intelligence? Second, can we judge nonhuman capacities using human tasks and definitions? Third, how do we fairly test creatures with different sensory systems from ours? Fourth, how does a nonmammalian brain process information? Fifth, to what extent do avian cognitive capacities match those of mammals? Ongoing studies provide only preliminary answers to the first four questions, but considerable data exist to respond to the fifth. To summarize the state of our knowledge, I examine concepts of intelligence and different types or specializations of intelligence, and review the history of avian cognitive research and studies that indicate advanced cognition, with an emphasis on Grey parrots. I then present some ideas concerning the evolution of intelligence in parrots and possibly other birds.
Visiting Associate Professor
MIT Media Lab. and Cambridge Basic Research
(on leave from Michigan State University,
East Lansing, MI 48824)
We humans have amassed a very impressive history of making man-made devices using an engineering paradigm that we now take for granted --- the given-task paradigm. With this paradigm, the task to be executed by man-made devices is given, and it is the engineers, rather than the machines, who understand the given task. A new paradigm is needed for muddy tasks that humans do well but existing machines do not. This new paradigm is called the developmental paradigm --- constructing developmental machines.
This new paradigm is motivated by human cognitive and behavioral development from infancy to adulthood. Central to the new paradigm is a new kind of algorithm, called a developmental algorithm. The human developmental algorithm enables humans to develop mentally from infancy to adulthood. A developmental algorithm for machines enables the machines to learn new tasks without any need for re-programming. Development includes learning but needs more: it requires a human programmer to write a program that enables a machine to learn subjects that the programmer does not understand, or even cannot predict. Running such a developmental algorithm, the machine develops its cognitive and behavioral capabilities through real-time, online interactions with the environment (including humans), using its sensors and effectors. This paradigm unifies capabilities that we now consider very different, such as vision, speech, language, reasoning, planning, decision making, navigation, object manipulation, human-machine interaction, etc. It also raises a series of interesting new research issues. This talk will describe the SAIL project, whose goal is to develop developmental machines using this new paradigm. Some results will be presented through a video presentation.
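The core loop of a developmental machine --- sense, act, and update from feedback, with no task built in --- can be caricatured in a few lines. The class name, the learning rule, and the toy environment below are entirely our own assumptions for illustration; they are not the SAIL design.

```python
import random

class DevelopmentalAgent:
    """Toy sketch of an open-ended developmental loop: the agent is not
    programmed with any particular task, only with a rule for updating
    action preferences from online sensory feedback."""

    def __init__(self, actions, lr=0.5, explore=0.1):
        self.actions = actions
        self.lr = lr                         # learning rate
        self.explore = explore               # exploration probability
        self.value = {}                      # (observation, action) -> estimate

    def act(self, obs):
        if random.random() < self.explore:   # occasionally try something new
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value.get((obs, a), 0.0))

    def learn(self, obs, action, feedback):
        # move the stored estimate toward the feedback just received
        key = (obs, action)
        old = self.value.get(key, 0.0)
        self.value[key] = old + self.lr * (feedback - old)
```

Run against any environment that supplies observations and feedback, the same code acquires whatever mapping that environment rewards --- the "task" lives in the interaction, not in the program.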
Visual aids needed:
Overhead transparency projector
VHS video tape player connected to a video display
------------- short bio -------------------
About the Speaker:
Juyang Weng received the BS degree from Fudan University, Shanghai, China in 1982, and the M.S. and Ph.D. degrees from the University of Illinois, Urbana-Champaign, USA, in 1985 and 1989, respectively, all in computer science. Currently, he is a visiting associate professor at the MIT Media Lab and Cambridge Basic Research, on sabbatical leave from the Department of Computer Science and Engineering, Michigan State University, East Lansing, Michigan, USA. He is a coauthor of the book ``Motion and Structure from Image Sequences'' (with T. S. Huang and N. Ahuja, Springer-Verlag, 1993). He is a program co-chair of the upcoming NSF/DARPA Workshop on Development and Learning to be held at Michigan State University in Spring 2000 (http://www.cse.msu.edu/dl/). His current research interests include computer vision; learning methods for vision, speech, language, robot manipulation, and navigation; human-machine interfaces; and automated mental development.