Computer Graphics Workshop '96 Lecture Notes


Today's topics
Inventor application model

So far we have been concerned only with understanding some of the facilities Inventor provides for generating computer graphics. However, there is much more involved in a real 3D application than just the graphics: there must be interaction between the program and the user, and there must be some computation going on behind the graphics.

The model for a computer application that most of you are familiar with is probably the one in which the program starts, does some computation, perhaps interacts with the user, and then ends. At all times, the flow of control of the program is in your (or your application's) hands.

Inventor applications reverse this model. Instead, Open Inventor controls the flow of control of the program, and the programmer registers callbacks for certain events. Inventor takes care of detecting these events, such as the picking of objects in the scene using the mouse, and calls a user-specified procedure when the events occur.

Let's examine two different ways of creating a simple animation in Inventor: first the "direct control" method, and then the better, Inventor-friendly callback method.

Direct control of program - disadvantages

Let's write a short program to make a sphere orbit a cone.
(define viewer (new-SoXtExaminerViewer))
(-> viewer 'show)

(define pi 3.14159265358979)

(define root (new-SoSeparator))
(-> root 'ref)
(-> viewer 'setSceneGraph root)
(-> root 'addChild (new-SoCone))

(define transform (new-SoTransform))
(-> root 'addChild transform)
(-> root 'addChild (new-SoSphere))

(define (loop radius step-number steps-per-revolution)
  (let ((x (* radius (cos (* 2 pi
			     (/ step-number
				steps-per-revolution)))))
	(z (* radius (sin (* 2 pi
			     (/ step-number
				steps-per-revolution))))))
    (-> (-> transform 'translation) 'setValue x 0.0 z)
    (if (< (1+ step-number) steps-per-revolution)
	(loop radius (1+ step-number) steps-per-revolution)
	(loop radius 0 steps-per-revolution))))

(loop 3.0 0 30)  ; radius = 3, 30 steps per revolution
What happens? Nothing, or so it seems. The problem is that the loop is running without giving Inventor a chance to update the viewer. Let's insert the following line after the setting of the translation field in the transform node:
(-> viewer 'render)
Now we can see the sphere orbiting the cone, but note that we cannot interact with the scene, and cannot type at the Scheme interpreter. While we succeeded in our goal of animating the scene, we did so in a way that disabled all the other functionality Inventor provides.
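The arithmetic inside the loop is just parametric motion around a circle. Here is a standalone sketch of that same computation (in Python purely for illustration; the course code itself is Scheme):

```python
import math

def orbit_position(radius, step_number, steps_per_revolution):
    """x/y/z coordinates for one step of a circular orbit in the y=0 plane,
    matching the body of the Scheme "loop" procedure above."""
    angle = 2 * math.pi * step_number / steps_per_revolution
    return (radius * math.cos(angle), 0.0, radius * math.sin(angle))

# Step 0 starts on the +x axis; halfway through a revolution the sphere
# is on the opposite side of the cone.
print(orbit_position(3.0, 0, 30))   # (3.0, 0.0, 0.0)
```

After `steps_per_revolution` steps the angle wraps back to 2π, which is why the Scheme loop simply resets `step-number` to 0.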

Callbacks and the Inventor Xt mainloop

Why did the above method of application design not work well? Specifically, why did interaction stop working once we had the scene being updated? The reason is as follows. The examiner viewer knows how to handle mouse-based interaction; when events such as button presses and mouse movements are received, the viewer handles their processing and moves the scene around. These events are created by the X window system and are translated into Inventor events by Inventor. In this context Inventor is a program, and the flow of control must return to that program in order for the events to be handled. Since we are stuck in our "loop" procedure, Inventor does not have a chance to run, and the mouse events never get processed.

Inventor's "loop" procedure is called the Xt main loop; the actual function is called SoXt::mainLoop. This procedure is called before the Scheme interpreter starts; in fact, the interpreter is run from inside this main loop. When the user is not actively interacting with the interpreter, the Inventor main loop is still running, processing events and rendering scenes. If the interpreter goes into an infinite loop, as in the above example, the Inventor main loop stops.

The solution to our problem is to create a callback; Inventor will "call us back" when our program is allowed to run. By making this concession, that is, losing the "on-demand" feel of Inventor, we gain all of its functionality for interactivity.
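This inversion of control can be sketched in a few lines. The following toy model (Python for illustration; `MainLoop` and its methods are hypothetical, not part of Inventor) shows the essential shape: the framework owns the loop, and the application only registers callbacks:

```python
# Minimal sketch of a callback-driven main loop: the framework owns the
# flow of control and the application only registers handlers.
class MainLoop:
    def __init__(self):
        self.handlers = {}   # event name -> list of callbacks
        self.queue = []      # pending events

    def register(self, event, callback):
        self.handlers.setdefault(event, []).append(callback)

    def post(self, event, data=None):
        self.queue.append((event, data))

    def run(self):
        # A real toolkit blocks here waiting for window-system events;
        # this toy version just drains a queue.
        while self.queue:
            event, data = self.queue.pop(0)
            for cb in self.handlers.get(event, []):
                cb(data)

loop = MainLoop()
loop.register("button-press", lambda data: print("picked:", data))
loop.post("button-press", "sphere")
loop.run()   # prints: picked: sphere
```

Note that the application never calls its own handler; it hands the function to the loop and waits to be called back, exactly as an Inventor program hands callbacks to sensors.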


Sensors

Sensors are Inventor objects which detect certain occurrences and call user-specified functions when they happen. Specifically, sensors can detect:

the passage of a specified amount of time (alarm and timer sensors)
moments when the application has nothing better to do (idle and one-shot sensors)
changes to the data in a field or node (field and node sensors)

Timer sensors

Let's redo the above animation using a timer sensor for the animation.
(define viewer (new-SoXtExaminerViewer))
(-> viewer 'show)

(define pi 3.14159265358979)

(define root (new-SoSeparator))
(-> root 'ref)
(-> viewer 'setSceneGraph root)
(-> root 'addChild (new-SoCone))

(define transform (new-SoTransform))
(-> root 'addChild transform)
(-> root 'addChild (new-SoSphere))

(define *steps-per-revolution* 30)
(define *radius* 3.0)

(define sensor-cb-func
  (let ((step-number 0))
    (lambda (user-data sensor)
      (let ((x (* *radius* 
		  (cos (* 2 pi
			  (/ step-number
			     *steps-per-revolution*)))))
	    (z (* *radius* 
		  (sin (* 2 pi
			  (/ step-number
			     *steps-per-revolution*))))))
	(-> (-> transform 'translation) 'setValue x 0.0 z)
	(if (< (1+ step-number) *steps-per-revolution*)
	    (set! step-number (1+ step-number))
	    (set! step-number 0))))))

;; set up callback

(define callback-info (new-SchemeSoCBInfo))
(-> callback-info 'ref)
(-> (-> callback-info 'callbackName) 
    'setValue "sensor-cb-func")

(define timer-sensor
  (new-SoTimerSensor (get-scheme-sensor-cb) 
		     (void-cast callback-info)))
; could alternatively write the above line like this:
;(define timer-sensor (new-SoTimerSensor))
;(-> timer-sensor 'setFunction (get-scheme-sensor-cb))
;(-> timer-sensor 'setData (void-cast callback-info))
(-> timer-sensor 'setInterval
    (new-SbTime (/ 1.0 30.0))) ; repeat 30 times/sec

;; start animating
(-> timer-sensor 'schedule)
The first thing we notice is the call to the schedule method of "timer-sensor". This call puts the sensor on the timer queue. There are two sensor queues in Inventor: the timer queue and the delay queue. The timer queue contains all scheduled alarm and timer sensors, sorted by time until activation; the delay queue contains all other types of sensors. The call
(-> timer-sensor 'schedule)
tells Inventor that this sensor should be triggered at some time in the future. Because this type of sensor is designed to be triggered multiple times, we need only schedule it once. Sensors designed to go off only once (SoOneShotSensor, SoIdleSensor, SoAlarmSensor) are not automatically rescheduled.
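The scheduling behavior described above can be modeled in miniature: a time-sorted queue, with repeating timer sensors automatically rescheduled after firing and one-shot sensors simply removed. This toy model (Python, purely illustrative; `TimerQueue` and `ToySensor` are invented names, not Inventor classes) captures the idea:

```python
import heapq

class TimerQueue:
    """Toy model of Inventor's timer queue: entries kept sorted by
    time until activation."""
    def __init__(self):
        self.heap = []

    def schedule(self, sensor, now):
        # id() is a tiebreaker so the heap never compares sensors directly.
        heapq.heappush(self.heap, (now + sensor.interval, id(sensor), sensor))

    def process(self, now):
        # Fire every sensor whose trigger time has arrived; repeating
        # sensors go straight back onto the queue, one-shots do not.
        while self.heap and self.heap[0][0] <= now:
            when, _, sensor = heapq.heappop(self.heap)
            sensor.callback(sensor)
            if sensor.repeating:
                self.schedule(sensor, when)

class ToySensor:
    def __init__(self, interval, callback, repeating):
        self.interval = interval
        self.callback = callback
        self.repeating = repeating

fired = []
q = TimerQueue()
tick = ToySensor(interval=1.0,
                 callback=lambda s: fired.append("tick"),
                 repeating=True)
q.schedule(tick, now=0.0)   # schedule once...
q.process(now=3.0)          # ...but it fires at t=1, 2, and 3
print(len(fired))           # 3
```

This is why the program above calls schedule only once: the repeating timer sensor keeps putting itself back on the queue.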

The second thing we notice about the above program is the callback function "sensor-cb-func". This function takes two arguments: the first is of type void *, or a generic C pointer; the second is a pointer to the sensor which called the function. This function prototype comes from the SoSensor manual page:

typedef void SoSensorCB(void *data, SoSensor *sensor)

This means that a valid sensor callback has no return value, and takes a void pointer and an SoSensor pointer as arguments. From Scheme's perspective, this means that the function must take two arguments, and that when it is called the arguments' values will have the above types.

The last thing we notice about the above sensor setup procedure is the (unfortunately messy) procedure for actually setting up the sensor callback from within Scheme. In C++, the setFunction method takes as its argument a function pointer; unfortunately, this notion is completely incompatible with the Scheme notion of closures. Instead we provide wrapper functions written in C++ whose only purpose is to call a provided Scheme function. The call to (get-scheme-sensor-cb) returns a valid function pointer which can be used as the argument to setFunction. This function expects as its data argument an object of type SchemeSoCBInfo. This class has the following data members:

SoSFString callbackName;
SoSFNode affectsNode;

The function that (get-scheme-sensor-cb) provides takes, as all sensor callbacks must, a void pointer as its first argument and a sensor pointer as its second. It first casts the void pointer to a SchemeSoCBInfo pointer and extracts the callbackName field. It then calls the Scheme function of that name, with the first argument being the node named in the affectsNode field, cast to a void pointer, and the second argument being the sensor which triggered the callback.

When attempting to use the callback data (the affectsNode field of the SchemeSoCBInfo class), the following precaution must be taken. Because an SoSFNode field increments the reference count of the node it contains, you cannot simply cast an arbitrary data type to an SoNode when calling the setValue method of this class. For this reason, when using Scheme it is easier to make all variables that a callback is likely to use global, rather than relying on the callback's user data argument.

Idle sensors

Idle sensors (SoIdleSensor) are called by Inventor whenever the CPU is idle. This might allow a low-priority task to be called whenever there is nothing better to do. However, if an idle sensor is rescheduled from within its callback, the CPU will never become "idle". Instead, it will keep getting called as fast as possible under the constraints of the rest of the Inventor application. For example, all interaction with viewers will continue to work.

One example of an idle sensor in action is the Scheme interpreter itself. Before the Inventor main loop is entered by the Scheme application, ivyscm, an idle sensor is scheduled. The callback for this sensor first reschedules the sensor, and then checks the standard input to see if the user has typed anything. If something has been entered, it calls the Scheme interpreter to evaluate the expression; if not, it immediately returns, allowing the rest of the application to continue execution.
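The reschedule-then-poll pattern used by the interpreter's idle sensor can be sketched generically. In this toy model (Python; `ToyApp` and `poll_input` are hypothetical names, and polling stdin is replaced by a counter), idle callbacks run only when no events are pending, and the callback's first act is to reschedule itself:

```python
import collections

class ToyApp:
    """Toy main loop that runs idle callbacks only when no events are pending."""
    def __init__(self):
        self.events = collections.deque()
        self.idle_callbacks = collections.deque()

    def schedule_idle(self, callback):
        self.idle_callbacks.append(callback)

    def run(self, steps):
        for _ in range(steps):
            if self.events:
                self.events.popleft()()          # normal event processing
            elif self.idle_callbacks:
                self.idle_callbacks.popleft()()  # nothing better to do

app = ToyApp()
polls = []

def poll_input():
    # Reschedule first, as the interpreter's idle callback does, so the
    # poll keeps happening; then do the (quick) check and return.
    app.schedule_idle(poll_input)
    polls.append("polled")

app.schedule_idle(poll_input)
app.run(steps=5)
print(len(polls))   # 5
```

Because each poll returns immediately instead of blocking on input, the rest of the application keeps running between polls.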

Node and field sensors

SoFieldSensor and SoNodeSensor are two types of data sensors; they detect when a value has changed in a field or node. More specifically, a field sensor is attached to a field and calls its callback when that value changes; a node sensor is attached to a node and is triggered when a field's value changes in that node or in any node below it, or when the layout of the scene graph changes below that node (i.e. by adding or removing nodes from the scene graph).
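The attach/notify behavior of a field sensor can be modeled minimally. Here `ToyField` is an invented stand-in (Python, illustrative only; it is not Inventor's SoSFVec3f or SoFieldSensor): a value container that triggers every attached callback whenever its value actually changes:

```python
class ToyField:
    """A field that notifies attached sensors whenever its value changes."""
    def __init__(self, value):
        self._value = value
        self._sensors = []

    def attach(self, callback):
        self._sensors.append(callback)

    def set_value(self, value):
        if value != self._value:       # only a real change triggers sensors
            self._value = value
            for cb in self._sensors:
                cb(self)               # trigger each attached sensor

    def get_value(self):
        return self._value

pos = ToyField((0.0, 0.0, 0.0))
pos.attach(lambda field: print("moved to", field.get_value()))
pos.set_value((3.0, 0.0, 0.0))   # prints: moved to (3.0, 0.0, 0.0)
```

A node sensor is the same idea one level up: it watches every field in a node (and in the nodes beneath it), plus the child list itself.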

You might use a field or node sensor to propagate changes through your scene graph; for example, one object might change some value in another, which would cause a node sensor to be triggered, which would in turn cause the newly changed object to update others. This method of updating objects in the scene graph is very flexible and is therefore suitable for very complicated relationships between objects, but it can become difficult to manage. Next week we will discuss engines in more detail; these objects (some experimentation with which was included in the second problem set) can be used to build fairly simple constraints between the positions of objects that can be constructed once and ignored from then on.

Next lecture

Next time we will discuss two methods of introducing true graphical interactivity into your application: selection and event processing.


$Id: index.html,v 1.5 1996/01/17 19:08:19 kbrussel Exp $