
Biconjugate Gradient Descent

The linear system in Equation 5 is large. Each rigid element in a 3-D model contributes 6 variables to the length of ${\bf q}$. Each constraint contributes a number of variables to the length of ${\mbox{\boldmath$\ \lambda $}}$ equal to the number of unconstrained degrees of freedom. So for a 5-link upper-body model, ${\bf W}$ is rank 30 and ${\mbox{\boldmath$\ \lambda $}}$ has length 15.

Fortunately, the equation has some special structure. ${\bf W}$ is block diagonal and is therefore easy to compute from its block-diagonal inverse, the system mass matrix ${\bf M}$. The constraint Jacobian ${\bf J}$ is sparse when the number of constraints is O(n), where n is the dimension of ${\bf q}$.
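As a rough illustration of why the block-diagonal structure helps (a minimal sketch; the function name, use of scipy, and 6x6 block size are assumptions, not part of the original), ${\bf W}$ can be formed by inverting ${\bf M}$ one small block at a time instead of inverting the full system matrix:

import numpy as np
from scipy.sparse import block_diag

def invert_block_diagonal_mass(mass_blocks):
    """Invert a block-diagonal mass matrix M one block at a time.

    mass_blocks: list of small per-element mass matrices
                 (e.g. one 6x6 block per rigid element).
    Returns W = M^{-1} as a sparse block-diagonal matrix, so a
    dense inversion of the full system matrix is never needed.
    """
    inverse_blocks = [np.linalg.inv(m) for m in mass_blocks]
    return block_diag(inverse_blocks, format="csr")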

The biconjugate gradient descent method for sparse matrix systems [18] takes advantage of this special structure to solve for ${\mbox{\boldmath$\ \lambda $}}$ stably and efficiently. The algorithm works by iteratively searching for the minimum of the potential field:

\begin{displaymath}
{\cal F}(x) = \frac{1}{2} {\bf x}^{T} {\bf A} {\bf x} - {\bf b}^{T} {\bf x}
\end{displaymath}

This minimum occurs at the point where the gradient is zero:

\begin{displaymath}
\nabla {\cal F} = {\bf A} {\bf x} - {\bf b} = 0
\end{displaymath}

For the constraint satisfaction problem, ${\bf x}$ is the ${\mbox{\boldmath$\ \lambda $}}$ we wish to find, ${\bf A}$ is the sparse matrix $-{\bf J}^{T}{\bf W}{\bf J}$, and ${\bf b}$ is $ \kappa $ from Equation 5.
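A minimal sketch of this solve, assuming ${\bf J}$, ${\bf W}$, and $ \kappa $ have already been assembled as sparse matrices and a dense vector (the function name is hypothetical, and scipy's biconjugate gradient routine stands in for the sparse solver of [18]):

from scipy.sparse.linalg import bicg

def solve_constraint_forces(J, W, kappa):
    """Solve  (-J^T W J) lambda = kappa  for the constraint forces lambda.

    J     : sparse constraint Jacobian
    W     : sparse block-diagonal inverse mass matrix
    kappa : right-hand-side vector from Equation 5
    """
    A = -(J.T @ W @ J)          # sparse system matrix, kept sparse throughout
    lam, info = bicg(A, kappa)  # iterative biconjugate gradient solve
    if info != 0:
        raise RuntimeError("bicg did not converge (info=%d)" % info)
    return lam

Because only sparse matrix-vector products are needed inside the iteration, the cost per step stays proportional to the number of nonzeros in ${\bf A}$ rather than to the square of the system size.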

