The Data
Although our automatic tracker can recover about half of the player
trajectories, for our research on action recognition we
wanted to start with data for all of the players.
wanted to start with data for all of the players. Below is a screen shot from software we
wrote that allows a patient person to track all the players manually by following each
player (and the ball) with the mouse. The system also allows the user to track the
approximate orientation of each object.
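Roughly speaking, each tracked object ends up as a sequence of per-frame samples. The record below is hypothetical (the actual file format is not described here) and only illustrates the kind of information the tool collects: a label, an image position, and an approximate orientation.

    from dataclasses import dataclass

    @dataclass
    class TrackSample:
        frame: int    # video frame index
        label: str    # e.g. "SS", "QB", or "ball" (entered manually)
        x: float      # image x position under the mouse cursor, in pixels
        y: float      # image y position under the mouse cursor, in pixels
        theta: float  # approximate orientation in the image, in radians

    # A tracked play is then just a list of such samples per object:
    play = [
        TrackSample(frame=0, label="SS", x=412.0, y=287.5, theta=1.57),
        TrackSample(frame=1, label="SS", x=413.2, y=288.1, theta=1.55),
    ]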
This process is time-consuming. In addition to
all the players and their orientations, the user must track several field points so the data
can be transformed to the field coordinate system. Because the camera pans and zooms
throughout the play, by the time the play has been tracked there is
significant error in the signal: the player motions are quite jittery when played back,
even when the players in the original video are motionless.
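Conceptually, the rectification to field coordinates can be done with a planar homography estimated from the tracked field points, re-estimated whenever the camera view changes. The sketch below uses OpenCV with made-up landmark coordinates; it illustrates the idea rather than the exact procedure used by our software.

    import numpy as np
    import cv2

    # Image positions of known field landmarks in one frame (pixels) -- made up.
    image_pts = np.array([[102, 455], [630, 440], [118, 212], [598, 205]],
                         dtype=np.float32)
    # The same landmarks in field coordinates (yards) -- also made up.
    field_pts = np.array([[0, 0], [0, 53.3], [20, 0], [20, 53.3]],
                         dtype=np.float32)

    # Estimate the image-to-field homography for this frame.  Because the
    # camera pans and zooms, a new homography is needed whenever the view changes.
    H, _ = cv2.findHomography(image_pts, field_pts)

    # Map a tracked player position from the image onto the field plane.
    player_image_pos = np.array([[[412.0, 287.5]]], dtype=np.float32)
    player_field_pos = cv2.perspectiveTransform(player_image_pos, H)[0, 0]
    print(player_field_pos)   # approximate (x, y) on the field, in yards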
Shown below are two additional problems with the
data. In the left image a player (the SS) has fallen; where should the
person tracking the play mark the player's position? A second problem is shown on the right:
where exactly is the SS? Small shifts in image position lead to larger changes in the
rectified data, yet the person tracking must estimate where the center of the player
projects down onto the field in the video. There are other problems as well, but in
short, even though this data is acquired manually, it is still
quite noisy. The data that will become available within the next few years from
companies like Trakus, which use transmitters embedded in helmets,
will provide far more accurate position and orientation information.
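One generic way to tame the jitter and noise in such manually tracked trajectories is to low-pass filter each coordinate over time. The moving-average sketch below only illustrates that idea; it is not the processing actually applied to this data.

    import numpy as np

    def smooth_trajectory(xy, window=9):
        """Smooth an (N, 2) array of positions with a centered moving average.

        A simple low-pass filter suppresses frame-to-frame jitter at the cost
        of slightly blurring genuinely fast motion; the window size is the
        trade-off.
        """
        kernel = np.ones(window) / window
        smoothed = np.empty_like(xy, dtype=float)
        for dim in range(xy.shape[1]):
            # 'same' keeps the output length equal to the input; the edges are
            # implicitly zero-padded, so the first/last few samples are less reliable.
            smoothed[:, dim] = np.convolve(xy[:, dim], kernel, mode="same")
        return smoothed

    # Example: a stationary player plus tracking noise stays (nearly)
    # stationary after smoothing.
    rng = np.random.default_rng(0)
    noisy = np.array([[10.0, 25.0]] * 100) + rng.normal(scale=0.5, size=(100, 2))
    print(smooth_trajectory(noisy)[50])   # close to (10, 25)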
Once the data is obtained, we
can transform it and display the play in the field coordinate system, indicating
player labels (also manually entered) and object orientation.
The data overlaid on one video
frame:
More data sequences
(only try this if you have a high-speed connection).
The data can also be displayed as
"chalkboard" images. Here is some automatically obtained data from the tracking system; only some of the trajectories are available. Below, for
comparison, are two fully-tracked plays generated using the manual tracking software.
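A chalkboard image of this kind can be produced by plotting each trajectory in field coordinates, labeling it, and marking orientation with sparse arrows. The matplotlib sketch below uses made-up trajectories and is purely illustrative.

    import numpy as np
    import matplotlib.pyplot as plt

    # Made-up field-coordinate trajectories: label -> (N, 3) array of x, y, theta.
    tracks = {
        "SS": np.column_stack([np.linspace(20, 35, 30),
                               np.linspace(30, 22, 30),
                               np.full(30, -0.5)]),
        "QB": np.column_stack([np.linspace(15, 12, 30),
                               np.full(30, 26.6),
                               np.full(30, np.pi)]),
    }

    fig, ax = plt.subplots(figsize=(8, 4))
    for label, traj in tracks.items():
        x, y, theta = traj[:, 0], traj[:, 1], traj[:, 2]
        ax.plot(x, y, linewidth=1)            # the trajectory itself
        ax.annotate(label, (x[0], y[0]))      # player label at the start point
        ax.quiver(x[::10], y[::10],           # sparse orientation arrows
                  np.cos(theta[::10]), np.sin(theta[::10]),
                  scale=30, width=0.003)

    ax.set_xlim(0, 60)        # example bounds in yards
    ax.set_ylim(0, 53.3)      # 53.3 yd is the field width
    ax.set_aspect("equal")
    ax.set_title("Chalkboard view of one play (illustrative data)")
    plt.show()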
So that's the data. We'd like to
develop vision algorithms that can obtain all this data automatically, but for now we are
using manually obtained trajectory information. In a few years, though, we are going to
have systems that output perfect trajectories. The question, then, is what can we do with
them?