Lecture11

The document discusses numerical inverse kinematics, highlighting the challenges of finding closed-form solutions and the advantages of numerical methods. It covers techniques such as direct numerical search and Jacobian-based methods, emphasizing the importance of understanding singularities and manipulability in robotic arms. The text also introduces the concept of pseudo-inverses, particularly the Moore-Penrose pseudo-inverse, as a solution to handle singularities while acknowledging the limitations of these mathematical approaches in practical applications.


Inverse Kinematics II

Glen Henshaw
Craig Carignan
March 6, 2025

1 Numerical Inverse Kinematics


It’s often difficult or impossible to find a closed–form solution for the inverse
kinematics of a given manipulator. But it’s always possible to solve the inverse
kinematics problem numerically. This has become more popular in the last
decade or so because embedded computers have become powerful enough to
enable it.
Numerical techniques are also often more flexible and powerful than closed–
form techniques. A good numerical approach can be agnostic to the manipulator
kinematics, for instance; you can give it the DH parameters of whatever arm
you are using, and it will work with few changes. You can even add or subtract
degrees of freedom.

2 Direct Numerical Search


You might be tempted to just set this up as a problem in joint space:

min_q ‖ ⁰_W T − T(q) ‖

or similar. This minimization problem could in theory be solved by gradient
descent or a quasi–Newton method.
This can work, but it often doesn’t. The issue is that the minimization
problem is nonlinear, and hence may have many local minima. In general it has
no special structure that lends itself to being solved by nonlinear optimization
solvers. It is also difficult to bound the number of iterations needed to find the
solution, which makes it problematic for realtime applications.
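As a concrete sketch of "works, but often doesn’t", here is a direct numerical search for a hypothetical two–link planar arm. The link lengths, target, and initial guess are all illustrative, and `scipy.optimize.minimize` with BFGS stands in for a generic quasi–Newton solver:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-link planar arm; link lengths, target, and initial
# guess are all illustrative.
l1, l2 = 1.0, 1.0

def fk(q):
    """Forward kinematics: tooltip position for joint angles q = (th1, th2)."""
    t1, t12 = q[0], q[0] + q[1]
    return np.array([l1*np.cos(t1) + l2*np.cos(t12),
                     l1*np.sin(t1) + l2*np.sin(t12)])

def ik_direct(x_des, q0):
    """Solve min_q ||x_des - fk(q)||^2 with a quasi-Newton method (BFGS)."""
    cost = lambda q: float(np.sum((x_des - fk(q))**2))
    res = minimize(cost, q0, method="BFGS")
    return res.x, res.fun

q_sol, err = ik_direct(np.array([1.0, 1.0]), q0=np.array([0.1, 0.5]))
print(q_sol, err)  # residual near zero from this (well-chosen) initial guess
```

From a good initial guess this converges; from a poor one it can stall in a local minimum, which is exactly the failure mode described above.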

3 Jacobian–based Techniques: Resolved–motion Rate Control (RMRC)
A better idea is to think about the problem in terms of Cartesian velocities and
rates instead of positions and orientations. That lets us use the fact that the

Jacobian relationship:
ẋ = J(q)q̇
is linear with respect to the joint rates q̇. The problem, of course, is that the
Jacobian relationship goes the wrong way. We want to be able to fix ẋ and
calculate q̇.
You could consider just inverting the Jacobian:

q̇ = J⁻¹(q) ẋ

This is also, generally speaking, a bad idea. The most obvious problem is that
the Jacobian may not even be square: it is 6 × N, where N is the number of
degrees of freedom of the manipulator. So if your manipulator is not 6–DOF,
you can’t take an inverse. But even if you do have a 6–DOF manipulator, simply
inverting the Jacobian is a bad idea, because square matrices can be singular.
So we’ll talk about what it means, both mathematically and mechanically, to
have a singular Jacobian.

3.0.1 An aside: some concepts from linear algebra


Consider the system
Ax = 0
where A is any matrix, square or nonsquare. If A is M × N, the set of all vectors
x that satisfy this equation forms a subset of RN . This subset is nonempty, since
it clearly contains the zero vector: x = 0 always satisfies Ax = 0. This subset
actually forms a subspace of RN , called the nullspace of the matrix A and
denoted N (A). If x1 is in N (A), then, by definition, A(x1 + x2 ) = Ax2 for any
value of x2 .
In terms of robotic manipulators, the Jacobian will always have a nullspace.
For a 6–DOF manipulator, in general the nullspace will contain only the zero
vector. But for a manipulator with more than 6 DOF, the nullspace will contain
a subspace of joint rates that do not cause the end effector to move. In
other words, for nearly any arm pose q, you will be able to find an infinite
number of joint rate vectors q̇ such that J(q)q̇ = 0.
This is called “self–motion”. It is perhaps easiest to visualize with a 7–DOF
revolute arm: in this case, you can independently choose to rotate the elbow
without moving the end effector.
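The self–motion idea is easy to check numerically. Assuming a hypothetical 3–DOF planar arm whose task is only the 2–D tooltip position (so J is 2 × 3 and the arm is redundant for this task), `scipy.linalg.null_space` returns a basis for N(J):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical 3-DOF planar arm doing a 2-D positioning task, so the
# Jacobian is 2 x 3 ("wide") and has a nontrivial nullspace at almost
# every pose. Link lengths are illustrative.
l1, l2, l3 = 1.0, 0.8, 0.5

def jacobian(q):
    t1 = q[0]
    t12 = q[0] + q[1]
    t123 = q[0] + q[1] + q[2]
    # Standard planar positional Jacobian (each column is the tooltip
    # velocity produced by a unit rate at that joint).
    return np.array([
        [-l1*np.sin(t1) - l2*np.sin(t12) - l3*np.sin(t123),
         -l2*np.sin(t12) - l3*np.sin(t123),
         -l3*np.sin(t123)],
        [l1*np.cos(t1) + l2*np.cos(t12) + l3*np.cos(t123),
         l2*np.cos(t12) + l3*np.cos(t123),
         l3*np.cos(t123)],
    ])

q = np.array([0.3, 0.7, -0.4])
J = jacobian(q)
N = null_space(J)        # orthonormal basis for N(J); here a single 3-vector
qdot_self = N[:, 0]      # a joint rate vector causing pure self-motion
print(J @ qdot_self)     # ~ [0, 0]: the end effector does not move
```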
If A is square, it may or may not be invertible. A square matrix A that
is not invertible is called singular. This means that there exists at least some
vector x ∈ RN not equal to zero such that Ax = 0. A matrix that is singular
may also be referred to as a reduced–rank matrix, as distinguished from a full-
rank matrix. Although the formal mathematical concept of singularity is only
defined for square matrices, in robotics we use this term somewhat loosely. We
will say that a nonsquare M × N matrix whose rank is less than min(M, N ) is
also singular.
Physically, when we refer to a singular Jacobian, what this means is that the
arm is in a pose where it cannot produce a Cartesian tooltip velocity in some

direction. Recall that the standard (read: bad) way of calculating an inverse
that is taught in linear algebra courses involves dividing by the matrix determi-
nant. A singular matrix has a determinant of 0, so attempting to calculate the
inverse of a singular matrix is the linear algebra equivalent of a divide–by–zero.
Consequently, if you blindly stuff a singular Jacobian into any sort of nu-
merical inversion routine, in the best case you will end up with infinite values
for at least some of the matrix entries (assuming your code doesn’t just blow
up!), and if you then multiply it by a desired Cartesian tooltip velocity vector,
you may end up with infinitely large joint rates.
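A small numerical illustration of this failure mode, using a made-up, nearly singular 2 × 2 "Jacobian" rather than any particular arm:

```python
import numpy as np

# A made-up, nearly singular 2x2 "Jacobian": the second column is almost
# exactly half the first, so the determinant is ~1e-8.
J = np.array([[1.0, 0.5],
              [2.0, 1.0 + 1e-8]])

xdot = np.array([0.0, 0.1])      # a modest Cartesian velocity command
qdot = np.linalg.inv(J) @ xdot   # blind inversion
print(np.max(np.abs(qdot)))      # ~1e7: absurdly large joint rates
print(np.linalg.cond(J))         # the condition number flags the problem
```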

4 Manipulability and Singularities


The manipulability of a manipulator is (usually) a function of its pose, and refers
to how close the Jacobian is to becoming reduced–rank. A manipulator is in a
singular pose when the Jacobian at that pose does, in fact, become reduced–
rank. There are several ways of calculating the manipulability of a robotic arm.
One way is to examine the determinant of the Jacobian; there are at least three
or four other widely used techniques.
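As a sketch, here are three such measures computed side by side. Yoshikawa’s w = √det(JJᵀ) is one standard choice; the smallest singular value and the inverse condition number are two of the other widely used ones. The example Jacobians are illustrative, not taken from a specific arm:

```python
import numpy as np

def manipulability_measures(J):
    """Three common scalar manipulability measures for a Jacobian J."""
    s = np.linalg.svd(J, compute_uv=False)   # singular values, descending
    return {
        "yoshikawa": np.sqrt(np.linalg.det(J @ J.T)),  # w = sqrt(det(J J^T))
        "sigma_min": s[-1],                  # hits 0 exactly at a singularity
        "inv_cond": s[-1] / s[0],            # 1 = perfectly conditioned
    }

# Illustrative Jacobians: well-conditioned vs. nearly singular.
print(manipulability_measures(np.eye(2)))
print(manipulability_measures(np.array([[1.0, 1.0],
                                        [1.0, 1.0 + 1e-6]])))
```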
Singular poses have the following characteristics:
• mobility of the manipulator is reduced
• infinite numbers of solutions to the inverse kinematics may exist

• small desired end effector velocities may result in very large joint velocities
• small joint torques may cause very large tool tip forces
There are two types of singularities: boundary singularities and internal
singularities. Boundary singularities are poses where the tooltip lies on the
boundary of the workspace. Remember our diagram from last lecture, except
here the arm is fully stretched out so that the end effector lies exactly on the
outer workspace boundary. This arm is in a singular pose, and cannot produce
a Cartesian velocity along the normal of the workspace boundary (either inward
or outward). All robotic arms have singular poses at their workspace boundaries.
For many manipulators there are also poses that are not on a workspace
boundary where the manipulator cannot produce a velocity in some direction.
It’s important to avoid these situations. Some ways to locate such poses:
• Set the determinant to zero, |J(q)| = 0, and solve for q
– only works for 6–DOF arms (or other arms with square Jacobians)
– this is essentially an exhaustive search; there may be many such
poses, and you have to somehow find all of them.

[Figure: workspace diagram of a two–link arm with link lengths l1 and l2, shown
fully stretched out to the outer workspace boundary; if l2 < l1 the workspace is
an annulus with an inner boundary as well.]

• Examine the rank degeneracy of J. If we express the Jacobian in terms of
its columns,

J = [ c1  c2  c3  · · ·  cN ]

then J becomes rank–deficient when the c vectors become linearly dependent.


Note that for an arm with a spherical wrist, this can be split into two
subproblems: Jtran (θarm ) and Jrot (θwrist ).

4.1 Example: two axis planar arm


We’re going to use the simplest J, which we derived a couple of lectures ago, in
order to make life easier on ourselves. Jacobians expressed in different frames all
have exactly the same singularities, so we are free to use the Jacobian expressed
in any frame we like.  
³J = [ l1 s2        0
       l1 c2 + l2   l2 ]

where M = 2 (x, y) and N = 2 (θ1 , θ2 ).
Setting the determinant to zero:

|J| = l1 l2 s2 = 0 ⇒ θ2 = 0◦ or 180◦
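This singularity condition is easy to verify numerically for the Jacobian above (the link lengths here are illustrative):

```python
import numpy as np

l1, l2 = 1.0, 0.8  # illustrative link lengths

def J3(theta2):
    """The frame-3 Jacobian of the two-link arm derived above."""
    s2, c2 = np.sin(theta2), np.cos(theta2)
    return np.array([[l1 * s2,      0.0],
                     [l1 * c2 + l2, l2]])

# |J| = l1*l2*sin(theta2): zero exactly at theta2 = 0 or 180 degrees.
for th2 in [0.0, np.pi / 2, np.pi]:
    print(th2, np.linalg.det(J3(th2)))
```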

[Figure: the two–link planar arm in the x–y plane, with joint angles θ1 and θ2.]
Figure 1: Two–link planar manipulator

in which case,

³J = [ 0         0
       l1 + l2   l2 ]

and the two columns are now linearly dependent.

4.1.1 Joint velocities near the singularity


q̇ = ³J⁻¹ ³ẋ  ⇒  ³J⁻¹ = [ 1/(l1 s2)                     0
                          −(1/(l1 s2) + c2/(l2 s2))     1/l2 ]

so

θ̇1 = (1/(l1 s2)) ẋ

θ̇2 = −(1/(l1 s2) + c2/(l2 s2)) ẋ

which implies that the arm cannot move in the y direction, and an attempt to
set ẋ will result in the denominators of the fractions becoming zero, hence the
joint rates will become infinite.
NOTE, however, that even if θ2 is not exactly 0◦ or 180◦ but is merely
close to one of these values, the denominators of the expressions will be quite
small, and the joint rates correspondingly large. Hence, the singularity is “felt”
for some distance away; exactly how far depends on the maximum joint rates
that are acceptable and/or achievable.
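A quick numerical check of how strongly the singularity is "felt", using the θ̇1 expression above with illustrative link lengths and a modest commanded velocity:

```python
import numpy as np

l1 = 1.0     # illustrative link length
xdot = 0.05  # a modest commanded frame-3 x velocity

# theta1_dot = xdot / (l1 * s2): the joint rate needed for the same small
# Cartesian velocity grows without bound as theta2 approaches 0.
for th2_deg in [30.0, 5.0, 1.0, 0.1]:
    s2 = np.sin(np.radians(th2_deg))
    print(th2_deg, xdot / (l1 * s2))  # 0.1, ~0.57, ~2.9, ~29 rad/s
```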

4.1.2 End Effector forces near the singularity


³f = ³J⁻ᵀ τ
so

fx = (1/(l1 s2)) τ1 − (1/(l1 s2) + c2/(l2 s2)) τ2

fy = (1/l2) τ2

which implies that fx → ∞ near the singularity.

4.2 Example: PUMA RRR Wrist


The D–H table for the PUMA robot wrist is

i    αi−1    ai−1    di    θi
4    −90◦    a3      d4    θ4
5    +90◦    0       0     θ5
6    −90◦    0       0     θ6

³Jrot = [ 0    s4    −c4 s5
          1    0      c5
          0    c4     s4 s5 ]

|³Jrot | = − sin θ5 = 0  ⇒  θ5 = 0◦ , ±180◦

At the singularity,

³Jrot = [ 0    s4    0
          1    0     1
          0    c4    0 ]
so the first and third columns are linearly dependent. At the singularity, the
first roll axis is aligned with the third roll axis, and essentially they have become
redundant.
This is an important point: interior singularities occur at poses where degrees
of freedom that are normally independent become redundant.
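This rank drop can be confirmed numerically from the wrist Jacobian derived above:

```python
import numpy as np

def Jrot(theta4, theta5):
    """Rotational Jacobian of the RRR wrist in frame 3, as derived above."""
    s4, c4 = np.sin(theta4), np.cos(theta4)
    s5, c5 = np.sin(theta5), np.cos(theta5)
    return np.array([[0.0, s4, -c4 * s5],
                     [1.0, 0.0, c5],
                     [0.0, c4, s4 * s5]])

# Full rank away from the singularity; rank drops to 2 at theta5 = 0,
# where the first and third columns become linearly dependent.
print(np.linalg.matrix_rank(Jrot(0.3, 0.8)))  # 3
print(np.linalg.matrix_rank(Jrot(0.3, 0.0)))  # 2
```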

4.3 Pseudo–inverses
OK, so if taking a direct inverse of the Jacobian is a bad idea, what else can
we do? A better idea is to use a pseudo–inverse. Contrary to popular opinion,
there are in general an infinite number of pseudo–inverses for a matrix, but the
one everyone knows about is the Moore–Penrose pseudo–inverse:
J† ≜ Jᵀ (JJᵀ)⁻¹

(note that JJᵀ is square, so the inverse in this expression is well defined)

You can always take a Moore–Penrose pseudo–inverse, even for a nonsquare


Jacobian. In addition, the Moore–Penrose pseudo–inverse has the nice property
that it minimizes the 2–norm:
‖J†(q)ẋ‖₂² = min_q̇ ‖q̇‖₂²   s.t.   J(q)q̇ = ẋ

What this means is that, of all the joint rate vectors that produce the desired ẋ,
the Moore–Penrose pseudo–inverse finds the one that is smallest, i.e. the one
with the lowest joint rates.
To tie this into our aside above: any given pseudo–inverse will choose exactly
one joint rate vector q̇ 2 that satisfies ẋ = J(q)q̇ 2 . But if the Jacobian has a
nontrivial nullspace N (J), we can find an infinite number of vectors q̇ 1 such
that ẋ = J(q)(q̇ 2 + q̇ 1 ). The Moore–Penrose pseudo–inverse inherently chooses
q̇ 1 such that ∥q̇ 1 + q̇ 2 ∥22 is the smallest it can be.
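NumPy’s `pinv` computes exactly this Moore–Penrose pseudo–inverse (via the SVD), so the minimum–norm property is easy to demonstrate on a made-up wide Jacobian:

```python
import numpy as np
from scipy.linalg import null_space

# A made-up wide (2x3) Jacobian, i.e. a redundant arm:
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])
xdot = np.array([0.3, -0.1])

qdot_mp = np.linalg.pinv(J) @ xdot       # Moore-Penrose solution

# Adding any nullspace vector gives another valid solution, but with a
# strictly larger norm:
n = null_space(J)[:, 0]
qdot_other = qdot_mp + 0.5 * n
print(J @ qdot_mp, J @ qdot_other)       # both reproduce xdot
print(np.linalg.norm(qdot_mp), np.linalg.norm(qdot_other))  # MP norm smaller
```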
There are more clever ways to pick q̇ 1 . You could, for instance, choose a q̇ 1
that moves the elbow of your arm away from a potential collision hazard or that
improves its manipulability.
One easy way to improve the robustness of any technique that uses a Moore–
Penrose pseudo–inverse is to add what is called a damped least–squares term:

J† ≜ Jᵀ (JJᵀ + γI)⁻¹

where 0 < γ ≪ 1. Equivalently, this adds a small positive value to the diagonal of


the square matrix that is being inverted. The effect of this is to limit joint rates
in the vicinity of a singularity, so it adds singularity robustness. The tradeoff
for this is that the trajectory the arm takes will deviate somewhat from the
(impossible to follow) trajectory that you specified.
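A minimal sketch of damped least–squares, assuming the two–link Jacobian from the earlier example evaluated very close to its singularity (link lengths and the damping value are illustrative):

```python
import numpy as np

def dls_pinv(J, gamma=1e-2):
    """Damped least-squares pseudo-inverse J^T (J J^T + gamma I)^-1,
    with a small positive gamma."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + gamma * np.eye(m))

# The two-link frame-3 Jacobian very close to its singularity
# (illustrative link lengths, theta2 = 1e-4 rad):
l1, l2, th2 = 1.0, 0.8, 1e-4
J = np.array([[l1 * np.sin(th2), 0.0],
              [l1 * np.cos(th2) + l2, l2]])
xdot = np.array([0.05, 0.0])

print(np.linalg.inv(J) @ xdot)  # raw inverse: enormous joint rates
print(dls_pinv(J) @ xdot)       # damped: bounded rates, at the cost of
                                # some tracking error
```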
A different, more mathematically justifiable, way to robustly take a pseudo–
inverse is based on the singular–value decomposition (SVD). SVDs are beyond
the scope of this course, but if you already know about SVDs, the idea is that
the “singular values” of a matrix, which are sort of like eigenvalues, can be
used to directly compute the matrix inverse. A near–singular matrix will have
a number of singular values that are very small. You can simply drop these
singular values and only use the ones that are above some threshold to calculate
a pseudo–inverse.
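For readers who do already know about SVDs, a sketch of the truncation idea (the tolerance is an arbitrary illustrative choice):

```python
import numpy as np

def svd_pinv(J, tol=1e-6):
    """Pseudo-inverse from the SVD: invert only the singular values above
    tol, dropping the near-zero ones entirely."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.array([1.0 / si if si > tol else 0.0 for si in s])
    return Vt.T @ (s_inv[:, None] * U.T)

# Even on an exactly singular matrix this never divides by zero:
J_sing = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
print(svd_pinv(J_sing))  # finite entries (all 0.25)
```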

4.4 Problems with pseudo–inverses


First, it’s important to bear in mind that a manipulator singularity is a me-
chanical phenomenon. Better math won’t let us get rid of singularities. So even
if you use a Moore–Penrose pseudo–inverse, if the arm is in a singular pose, you
still won’t be able to move it along its singular direction. Your best bet is to
avoid singular poses somehow, normally by examining the rank of the Jacobian
— or some other manipulability index — along the trajectory you want and if
it starts to become singular, modify the trajectory.
A more subtle problem stems from the fact that we’re operating in joint
velocity space. In order to get back to joint angle space, i.e. to have something
useful to send to whatever control laws are controlling the actuators, we have
to perform an integration.
Numerical integrators are susceptible to drift. Simple ones that are suitable
for realtime use are especially so. If you do something like

q i+1 = q i + ∆t J† (q i ) ẋ

i.e. use a straightforward Euler integration scheme, you will get trajectories
that look like this:

[Figure: the actual trajectory slowly drifting away from the desired trajectory.]
There are two common ways to solve this.

4.4.1 Iterated Jacobian method


One is called the “iterated Jacobian method”. The idea is that you do a Newton–
Raphson iteration on position.

x = f (q)
x0 = f (q 0 )

Take a Taylor series about q 0 :

f (q 0 + ∆q) = f (q 0 ) + (∂f /∂q)|q=q0 ∆q + O(∆q 2 )

Truncating at first order, identifying J = ∂f /∂q, and solving for q gives

q = q 0 + J⁻¹ (q 0 ) [x − x0 ]      for N = M      (1)

where [x − x0 ] plays the role of ∆x. Note that ∆x must be small for this to
converge.


OK, what is x here?

x = [ p
      ? ]
p is straightforward: It’s your Cartesian position vector. But what is in the
rotation spot?
It depends on how you have chosen to represent orientation, but it must be
a three–parameter representation. If you are using Euler or fixed angles,
it can be simply those. But if it’s a quaternion, you have a bit of a problem, be-
cause quaternions have four parameters. Remember, though, that quaternions
are defined as (writing q̃ for a quaternion, to distinguish it from the joint
vector q)

q̃ = [ ψ
      n̂ ]

where ψ is a scalar and n̂ is a 3–vector. In a small angle situation, ψ is close
to unity and n̂ is close to zero, and exactly zero if the angle is zero. In fact, for
small angles, n̂ approximates 1/2 times the 3–2–1 Euler angles. In general we
use n̂ as the rotation part.
So what’s ∆x? Again, for the positional part it’s straightforward: it’s p − p0 .
And again, for rotation... it’s complicated.
If you are using an Euler angle or fixed angle set, you can subtract them.
This hand–waves over a bunch of math, but as a small angle approximation it
holds, and only extreme pedants (or possibly those cramming for their doctoral
quals) need go any further. So if you are using the [α, β, γ]ᵀ set:

x = [ p
      α − α0
      β − β0
      γ − γ0 ]

If you are using a quaternion representation, you really shouldn’t just subtract
n̂ from n̂0 . The proper way to calculate the difference between two quaternions
q̃ and q̃0 is:

q̃0 ∗ ∆q̃ = q̃  ⇒  ∆q̃ = q̃0⁻¹ ∗ q̃

and it turns out that the (unit) quaternion inverse is

q̃⁻¹ = [ ψ
        −n̂ ]

and quaternion multiplication is defined as

[ ψ1 ]   [ ψ2 ]   [ ψ1 ψ2 − n̂1 · n̂2            ]
[ n̂1 ] ∗ [ n̂2 ] = [ ψ1 n̂2 + ψ2 n̂1 + n̂1 × n̂2  ]
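These quaternion operations are straightforward to implement. A sketch in the scalar–first convention used above, with an illustrative small–rotation example (for rotations about a common axis the error quaternion is easy to verify by hand):

```python
import numpy as np

# Scalar-first quaternions q = (psi, n), matching the convention above.
def quat_mul(qa, qb):
    """Quaternion product (psi_a, n_a) * (psi_b, n_b)."""
    pa, na = qa[0], qa[1:]
    pb, nb = qb[0], qb[1:]
    return np.concatenate([[pa * pb - na @ nb],
                           pa * nb + pb * na + np.cross(na, nb)])

def quat_inv(qa):
    """Inverse of a unit quaternion: negate the vector part."""
    return np.concatenate([[qa[0]], -qa[1:]])

def quat_from_axis_angle(axis, angle):
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2.0)],
                           np.sin(angle / 2.0) * axis])

# Error quaternion between two nearby orientations, dq = q0^-1 * q:
q0 = quat_from_axis_angle([0.0, 0.0, 1.0], 0.50)
q1 = quat_from_axis_angle([0.0, 0.0, 1.0], 0.52)
dq = quat_mul(quat_inv(q0), q1)
print(dq)  # scalar part ~1, vector part ~ [0, 0, 0.01]: half of 0.02 rad
```

Note how the vector part of the error quaternion comes out to roughly half the 0.02 rad rotation error, consistent with the small–angle claim above.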

So the algorithm is then:

1. q 0 = q and x0 = f (q 0 )
2. ∆x = x “−” x0 (using the generalized difference above)
3. J = J(q 0 )
4. ∆q = J† ∆x
5. q = q 0 + ∆q
6. ‖∆q‖ < ϵ? If yes, stop
7. go to 1
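Putting the algorithm together for the illustrative two–link planar arm from earlier (position only, so the generalized difference is ordinary subtraction; lengths, target, and initial guess are illustrative):

```python
import numpy as np

# Iterated-Jacobian IK for the illustrative two-link planar arm
# (position only, so the "generalized difference" is plain subtraction).
l1, l2 = 1.0, 0.8

def fk(q):
    t1, t12 = q[0], q[0] + q[1]
    return np.array([l1*np.cos(t1) + l2*np.cos(t12),
                     l1*np.sin(t1) + l2*np.sin(t12)])

def jac(q):
    t1, t12 = q[0], q[0] + q[1]
    return np.array([[-l1*np.sin(t1) - l2*np.sin(t12), -l2*np.sin(t12)],
                     [ l1*np.cos(t1) + l2*np.cos(t12),  l2*np.cos(t12)]])

def ik_iterated(x_des, q, eps=1e-10, max_iter=100):
    for _ in range(max_iter):
        dx = x_des - fk(q)                 # step 2: generalized difference
        dq = np.linalg.pinv(jac(q)) @ dx   # steps 3-4: dq = J^+ dx
        q = q + dq                         # step 5
        if np.linalg.norm(dq) < eps:       # step 6: converged?
            break
    return q

q_sol = ik_iterated(np.array([1.2, 0.9]), q=np.array([0.2, 0.5]))
print(fk(q_sol))  # converges to ~ [1.2, 0.9]
```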

4.4.2 Direct Error Correction
Another way to solve the integral drift problem is simply to build an error
correction term into the RMRC law, e.g. to implement

q̇ = J † (q) (ẋ + γ∆x)

where γ is a gain that has to be selected for good performance, and everything
else is defined as we did above.
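A sketch of this error–corrected RMRC loop, again for the illustrative two–link arm, holding a fixed setpoint (so the desired ẋ is zero and the γ∆x term does all the work; gain and timestep values are illustrative):

```python
import numpy as np

# Error-corrected RMRC, sketched for the illustrative two-link arm:
# qdot = J^+(q) (xdot_des + gamma * (x_des - fk(q))), Euler-integrated.
l1, l2, gamma, dt = 1.0, 0.8, 5.0, 0.01

def fk(q):
    t1, t12 = q[0], q[0] + q[1]
    return np.array([l1*np.cos(t1) + l2*np.cos(t12),
                     l1*np.sin(t1) + l2*np.sin(t12)])

def jac(q):
    t1, t12 = q[0], q[0] + q[1]
    return np.array([[-l1*np.sin(t1) - l2*np.sin(t12), -l2*np.sin(t12)],
                     [ l1*np.cos(t1) + l2*np.cos(t12),  l2*np.cos(t12)]])

# Hold a fixed setpoint (xdot_des = 0) starting slightly off target:
q = np.array([0.3, 0.9])
x_des = fk(q) + np.array([0.05, -0.03])
for _ in range(1000):
    qdot = np.linalg.pinv(jac(q)) @ (gamma * (x_des - fk(q)))
    q = q + dt * qdot  # Euler step
print(np.linalg.norm(x_des - fk(q)))  # error driven essentially to zero
```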

4.4.3 Constrained quadratic optimization


It turns out that we can derive a straightforward quadratic optimization problem
in the spirit of RMRC:

min_q̇ ‖ẋ − J(q)q̇‖²

= min_q̇ (ẋ − J(q)q̇)ᵀ (ẋ − J(q)q̇)

= min_q̇ ẋᵀẋ − 2ẋᵀJ(q)q̇ + q̇ᵀJᵀ(q)J(q)q̇

= min_q̇ −2ẋᵀJ(q)q̇ + q̇ᵀJᵀ(q)J(q)q̇

where the constant term ẋᵀẋ has been dropped in the last step, since it does not
depend on q̇.

This is in a form for which there are many good, realtime–capable quadratic
optimization solvers. Unlike the two previous techniques, though, in this form
it’s easy to add constraints, e.g.

min_q̇ −2ẋᵀJ(q)q̇ + q̇ᵀJᵀ(q)J(q)q̇

s.t.

θ1,min ≤ θ1 ≤ θ1,max
θ2,min ≤ θ2 ≤ θ2,max
⋮

and with a little cleverness you can even add constraints that depend on q,
not q̇, such as collision constraints. You can also correct for integrator drift by
solving

min_q̇ ‖ẋ − J(q)q̇‖² + γ‖q d − (q + ∆t q̇)‖²
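Because this objective is quadratic in q̇, it can be handed to any QP or bounded least–squares solver. A sketch using `scipy.optimize.lsq_linear` as a stand-in (all numbers are illustrative): the two quadratic terms are stacked into a single least–squares system, with box bounds on the joint rates playing the role of the constraints above.

```python
import numpy as np
from scipy.optimize import lsq_linear

# All values here are illustrative.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])        # Jacobian at the current pose
xdot = np.array([0.3, -0.1])           # desired Cartesian velocity
q = np.zeros(3)
q_d = np.array([0.1, -0.2, 0.3])       # drift-correction target pose
gamma, dt = 1.0, 0.01
rate_max = 0.5                         # |qdot_i| <= 0.5 rad/s

# Stack ||xdot - J qdot||^2 + gamma ||(q_d - q) - dt qdot||^2 into one
# system  min || A qdot - b ||^2  subject to the rate bounds:
A = np.vstack([J, np.sqrt(gamma) * dt * np.eye(3)])
b = np.concatenate([xdot, np.sqrt(gamma) * (q_d - q)])
res = lsq_linear(A, b, bounds=(-rate_max, rate_max))
print(res.x)  # joint rates that track xdot while respecting the limits
```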




Thinking along these lines eventually leads you to a technique called model–
predictive control, which if you’ve ever seen YouTube videos of robots doing
parkour, you’ve seen in action. It is a very powerful technique for robotic
motion control.
