n-dimensional rotation matrix

Postby quintopia » Sun Oct 26, 2008 3:34 am UTC

How do I construct a rotation matrix from an axis and an angle? I've learned Euler's formula for three dimensions, but I'm blanking on the higher-dimensional extension.

Postby skeptical scientist » Sun Oct 26, 2008 4:23 am UTC

What's your data in the higher dimensional analogue? In 3 dimensions, an (oriented) axis and an angle defines a rotation, but in n-dimensions it does not. Either you need an axis (1-dimensional subspace) and something more than an angle to define what your rotation does on the orthogonal complement (which will no longer be a plane), or else you need an n-2 dimensional subspace (to be fixed by the rotation) and an angle.

In either case, you have a fixed subspace plus a linear transformation on its orthogonal complement. Since rotations are linear, you can figure out what it does to an arbitrary vector by decomposing it as a sum of one vector in the fixed subspace, and one vector in the orthogonal complement.
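
To make that decomposition concrete, here is a minimal numpy sketch (my own illustration, not anything from the thread), assuming E is an n x k matrix whose columns form an orthonormal basis of the fixed subspace:

import numpy as np

def decompose(w, E):
    # component of w inside the fixed subspace (left alone by the rotation)
    fixed_part = E @ (E.T @ w)
    # component in the orthogonal complement (the part the rotation actually moves)
    moving_part = w - fixed_part
    return fixed_part, moving_part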

Postby quintopia » Sun Oct 26, 2008 5:07 am UTC

I got lost in those words; I'm probably missing the intuition. Let me just put it this way: I have two normal vectors and want a rotation taking one to the other.

Postby Harg » Sun Oct 26, 2008 7:01 am UTC

I don't think it's clear what a rotation in n dimensions actually is. In any case, what you can do is to expand your two vectors to an orthonormal basis via Gram-Schmidt and then just switch their entries in the identity matrix while leaving the others alone. You'll need a unitary transformation to flick onto each end, but that shouldn't be a problem.

Postby skeptical scientist » Sun Oct 26, 2008 7:03 am UTC

Harg, a rotation is a linear transformation of R^n that preserves lengths (and therefore distances), angles, and orientation. You can't find an orthonormal basis containing the two original vectors because they will not, in general, be orthogonal. If they are, switching their entries in the identity matrix and leaving the others alone would be orientation-reversing, which is not a rotation.

There is no unique rotation that takes one vector to another, because there are lots of rotations that will fix a given vector (except in R^2).

What you can do is take the rotation that has the least action (in some sense). Let's call your original two vectors x and y, and assume both are unit vectors, and are linearly independent. I'm assuming that you know what linearly independent means, what orthogonal means, what a vector subspace is, what it means for vectors to span a vector subspace, and what an orthogonal complement is; let me know if I need to explain any of those.

The rotation that sends x to y and has the least action (measured appropriately) is the one that fixes the orthogonal complement to the plane spanned by x and y. Let e_1, ..., e_{n-2} be a basis for this orthogonal complement, and let z = 2<x,y>y - x (where <,> is the scalar product).

Then the rotation we are looking for is the one that sends x to y, y to z, and e_i to e_i for each i, and extends linearly to R^n. We can think about what this does to a vector w geometrically as follows: first, project w onto the plane spanned by x and y, and keep track of the difference w' between w and its projection. Now rotate the projection of w within the plane spanned by x and y by the (unique) rotation of that plane that sends x to y. Finally, we "unproject" it by adding back w'.

This is the second option I discussed in my previous post, because it fixes an n-2 dimensional subspace (the orthogonal complement to the plane spanned by x and y) and rotates around that subspace by an angle equal to the angle between x and y.
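
In code, this construction translates almost line for line. A minimal numpy sketch (my own, not from the thread), assuming x and y are unit-length and linearly independent; the SVD is just one convenient way to get a basis e_1, ..., e_{n-2} of the orthogonal complement:

import numpy as np

def least_action_rotation(x, y):
    n = x.size
    # rows 2..n-1 of V^T span the null space of [x; y], i.e. the orthogonal
    # complement of span(x, y); use them as e_1, ..., e_{n-2}
    _, _, vt = np.linalg.svd(np.vstack([x, y]))
    E = vt[2:].T
    z = 2 * np.dot(x, y) * y - x        # where the rotation sends y
    B = np.column_stack([x, y, E])      # basis x, y, e_1, ..., e_{n-2}
    T = np.column_stack([y, z, E])      # images y, z, e_1, ..., e_{n-2}
    return T @ np.linalg.inv(B)         # the unique linear map with R B = T

As a sanity check, the result should be orthogonal with determinant 1 and should send x to y up to floating-point error.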

Postby Harg » Sun Oct 26, 2008 7:09 am UTC

Ah yes, good call. Serves me right for trying to be smart in the middle of the night.

Postby MartianInvader » Sun Oct 26, 2008 6:48 pm UTC

To put it algorithmically: get an orthonormal basis for your vector space so that the first two basis vectors span the plane generated by your two normal vectors. Say your vectors are at angle theta apart. Then write the 2-dimensional rotation-by-theta matrix in the upper-left corner of your n-by-n matrix, put 1's down the rest of the diagonal, and zeros everywhere else. (That matrix represents the rotation in your new basis; conjugate by the change-of-basis matrix to express it in standard coordinates.)

But you could actually put anything at all in the bottom-right (n-2)x(n-2) submatrix and the map would still send one of your vectors to the other (though it would only be a rotation if that block is itself orthogonal with determinant 1).

Postby quintopia » Sun Oct 26, 2008 6:58 pm UTC

So, just to make sure I'm understanding:

If I have a set of points in the (n-1)-dimensional hyperplane through the origin with normal vector x, and apply the rotation you have defined to them, will their images all lie in the (n-1)-dimensional hyperplane through the origin with normal vector y?

Also, I gather that I can numerically compute a basis for the orthogonal complement as null([x,y]^T), yes?

Postby MartianInvader » Mon Oct 27, 2008 1:03 am UTC

quintopia wrote:So, just to make sure I'm understanding:

If I have a set of points in the (n-1)-dimensional hyperplane through the origin with normal vector x, and apply the rotation you have defined to them, will their images all lie in the (n-1)-dimensional hyperplane through the origin with normal vector y?

Yeah. You can see this by looking at the action on your appropriately-chosen orthonormal basis: the basis vectors orthogonal to both x and y don't change, so they stay orthogonal to y, and the basis vector in the plane of x and y that is orthogonal to x rotates within that plane by the angle between x and y, so it ends up orthogonal to y.

quintopia wrote:Also, I gather that I can numerically compute a basis for the orthogonal complement as null([x,y]^T), yes?

hrmm... yes... yeah, that makes sense. Stuff orthogonal to x and y is exactly stuff that gives you zero when you take the dot product with x or y, which is what you're doing.

There is a moral to all of this, too, which is that an axis and an angle don't define a rotation in n dimensions the way they define one in 3 dimensions. Rather, a more appropriate way to think of the three-dimensional case is to define your rotation by a two-dimensional subspace and an angle. A rotation in n dimensions is uniquely determined by a set of mutually orthogonal two-dimensional subspaces and an angle for each one. So, for example, in even dimensions a rotation doesn't necessarily have an invariant axis.

I think. If I got any of that wrong, someone please correct me :wink:
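
To illustrate that last point, here is a small numpy check (my own example, not from the thread): a 4-dimensional rotation built from two orthogonal planes with two different angles has no eigenvalue equal to 1, so it fixes only the origin and has no invariant axis.

import numpy as np

def plane_rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R = np.zeros((4, 4))
R[:2, :2] = plane_rotation(0.3)   # rotate the (e1, e2)-plane by 0.3
R[2:, 2:] = plane_rotation(1.1)   # rotate the (e3, e4)-plane by 1.1

# a fixed axis would show up as an eigenvalue equal to 1; none appears here
print(np.linalg.eigvals(R))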

Postby quintopia » Mon Oct 27, 2008 2:40 am UTC

I think you got it wrong. What everyone else said is that the n-dimensional equivalent of an axis is the (n-2)-dimensional subspace that is fixed. Two dimensions is not enough.

So, how do I convert this into an algorithm I can actually implement? Here's my best guess based on what has been said so far:

- Find an orthonormal basis for span(x,y) (which is just [x, (y-<x,y>x)/|y-<x,y>x|], since x is already unit length).
- Find an orthonormal basis for R^n with the basis for span(x,y) as its first two components (complete it using null([x,y]^T)). Represent this basis as a matrix B.
- Find the angle theta between x and y (cos^{-1}(<x,y>)).
- Replace the upper-left corner of the identity matrix by the 2-D rotation matrix for angle theta ([cos theta, -sin theta; sin theta, cos theta]). Call this matrix R.
- The final rotation matrix is B R B^{-1}.

Do I have this algorithm correct?

Postby skeptical scientist » Mon Oct 27, 2008 2:52 am UTC

quintopia wrote:I think you got it wrong. What everyone else said is that the n-dimensional equivalent of an axis is the (n-2)-dimensional subspace that is fixed. Two dimensions is not enough.

So, how do I convert this into an algorithm I can actually implement? Here's my best guess based on what has been said so far:

- Find an orthonormal basis for span(x,y) (which is just [x, (y-<x,y>x)/|y-<x,y>x|], since x is already unit length).
- Find an orthonormal basis for R^n with the basis for span(x,y) as its first two components (complete it using null([x,y]^T)). Represent this basis as a matrix B.
- Find the angle theta between x and y (cos^{-1}(<x,y>)).
- Replace the upper-left corner of the identity matrix by the 2-D rotation matrix for angle theta ([cos theta, -sin theta; sin theta, cos theta]). Call this matrix R.
- The final rotation matrix is B R B^{-1}.

Do I have this algorithm correct?

I think that will work. I'm not quite sure what you mean by null([x,y]^T), or how you use it to complete your basis which started with [x, (y-<x,y>x)/|y-<x,y>x|] (I would just have used Gram-Schmidt on (x, y, e_1, ..., e_n) and thrown out the two zero vectors that resulted, in order to get a basis), but assuming that part works, everything else will work.
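
For reference, here is a minimal numpy sketch of that algorithm (my code; it uses an SVD in place of MATLAB's null to complete the basis, assumes x and y are unit-length and linearly independent, and uses B^{-1} = B^T since B is orthonormal):

import numpy as np

def rotation_x_to_y(x, y):
    n = x.size
    # orthonormal basis of span(x, y): x plus the normalized part of y orthogonal to x
    u = y - np.dot(x, y) * x
    u /= np.linalg.norm(u)
    # complete to an orthonormal basis of R^n via the null space of [x; y]
    _, _, vt = np.linalg.svd(np.vstack([x, y]))
    B = np.column_stack([x, u, vt[2:].T])
    # 2-D rotation by the angle between x and y in the upper-left corner
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    R = np.eye(n)
    R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]
    return B @ R @ B.T                  # B R B^{-1}, with B^{-1} = B^T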

Postby quintopia » Mon Oct 27, 2008 3:34 am UTC

null is just a MATLAB function I can use that returns an orthonormal basis for the null space of a given matrix. I would probably do Gram-Schmidt if I didn't have such a function. In any case, since the resulting vectors form a basis of the null space, they have to be orthogonal to span(x,y), and therefore to each of x and y, yes?

Edit: yes. I tried it and got back negligible dot products across the board.

One more thing: is there some way to calculate the sine of the angle between two vectors with higher accuracy than sin(arccos(<x,y>)), given real-world floating-point error in sin and arccos? I kind of need the error to be pretty small.

Postby skeptical scientist » Mon Oct 27, 2008 7:58 am UTC

If you want an exact solution (in terms of radicals) you can either take the norm of the cross product (in three dimensions) or use the fact that sin(arccos(t)) = sqrt(1 - t^2). I'm not sure how the floating-point accuracy of those operations compares.
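
In numpy terms, a small sketch of the second option (the clip guards against round-off pushing the dot product just past 1; for nearly parallel unit vectors the norm of the rejection is better conditioned):

import numpy as np

x = np.array([1.0, 0.0, 0.0, 0.0])
y = np.array([0.6, 0.8, 0.0, 0.0])      # both unit length

c = np.clip(np.dot(x, y), -1.0, 1.0)    # cos(theta)
s = np.sqrt(1.0 - c * c)                # sin(theta) = sqrt(1 - cos^2 theta), no trig calls
s_alt = np.linalg.norm(y - c * x)       # same value, more accurate when x and y are nearly parallel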

Postby quintopia » Mon Oct 27, 2008 9:36 pm UTC

Okay, now I have a very similar question and I worked it out and want to make sure my algebra is correct.

I'm trying to find the intersection of two hyperplanes containing the origin, a and b.

By definition, this is all points x where <a,x> = <b,x> = 0,
which means <a,x> - <b,x> = 0,
which means <a-b,x> = 0,
and so null((a-b)^T) is the solution set, yes?

I think I'm missing a constraint here, because it looks like the given solution is (n-1)-dimensional and it should be (n-2)-dimensional. Maybe the correct answer is again null([a,b]^T). That would at least have the right dimension.


Alright, what is the best way to generate points uniformly in the intersection of this subspace and the unit ball? Let's say B is a basis for the subspace. Could I generate a random number from [-1,1] to multiply by each basis vector, and then normalize the result (to get a point in the intersection of the subspace and the unit hypersphere), and then multiply it by the square root of a random number in [0,1] to randomize the radius?

I don't think this will work. But it may work if I choose the random factors from a normal distribution. . .

Edit: strange. I tested it and it seems to work better (in 3D) if I don't square root the radius. . .
Edit again: nvm, it's supposed to be rand^(1/n) for the radius. Everything works now. . .
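
For the record, here is a minimal numpy sketch of that recipe (my code, not from the thread), assuming B is an n x d matrix whose columns are an orthonormal basis of the subspace (d = n-2 here); the exponent is 1/d, the dimension of the subspace being sampled:

import numpy as np

def sample_subspace_ball(B, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = B.shape[1]
    g = rng.standard_normal(d)        # isotropic direction in subspace coordinates
    g /= np.linalg.norm(g)
    r = rng.random() ** (1.0 / d)     # radius distributed as u^(1/d)
    return B @ (r * g)                # uniform point in (subspace ∩ unit ball), in R^n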

Postby NathanielJ » Mon Oct 27, 2008 10:15 pm UTC

quintopia wrote:By definition, this is all points x where <a,x> = <b,x> = 0,
which means <a,x> - <b,x> = 0,
which means <a-b,x> = 0,
and so null((a-b)^T) is the solution set, yes?

I think I'm missing a constraint here, because it looks like the given solution is (n-1)-dimensional and it should be (n-2)-dimensional. Maybe the correct answer is again null([a,b]^T). That would at least have the right dimension.



You're indeed missing a constraint, and the problem occurs when you go from the first line to the second line. Yes, you need <a,x> - <b,x> = 0, but that's not enough (for example, that constraint could hold if <a,x> = <b,x> = 3, which doesn't satisfy the original constraints).

You indeed want null([a,b]^T), and it should be clear how this generalizes to an arbitrary number of vectors.
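
If scipy is available, this is essentially a one-liner. A small sketch, assuming a and b are the hyperplane normals as 1-D arrays:

import numpy as np
from scipy.linalg import null_space

a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])

# columns of N form an orthonormal basis of {x : <a,x> = <b,x> = 0},
# the (n-2)-dimensional intersection of the two hyperplanes
N = null_space(np.vstack([a, b]))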

Postby skeptical scientist » Tue Oct 28, 2008 1:56 am UTC

quintopia wrote:I'm trying to find the intersection of two hyperplanes containing the origin, a and b.

By definition, this is all points x where <a,x>=<b,x>=0

I'm not quite sure what you mean here. There are lots of hyperplanes containing any number of points, depending on dimension. In fact, both hyperplanes could be the plane spanned by a and b (which of course contains 0), in which case their intersection would definitely not be the set {x:<a,x>=<b,x>=0}.

Postby quintopia » Tue Oct 28, 2008 2:09 am UTC

I meant the standard linear algebra definition of a hyperplane containing the origin: the (n-1)-dimensional subspace orthogonal to the given vector. But no worries now, I worked it all out.

Postby inhahe » Sun Apr 25, 2010 4:45 am UTC

skeptical scientist wrote:I think that will work. [...] but assuming that part works, everything else will work.


Hi, I found this thread because I was googling for how to rotate something in n dimensions (really hard to find, strangely; I guess because (a) nobody cares about higher dimensions in proportion to how many people are writing computer games, and (b) rotation is ill-defined in higher dimensions). Figures I'd find the answer on xkcd.

But I don't understand any of this math. Can somebody put this into a straightforward algorithm for me?
For example, I have no idea what <x,y> means or what span(x,y) does. (Matrix multiplication, sine and cosine I can do.)

Also, I was wondering: I'm getting the idea that for any two vectors, in higher dimensions there's an unlimited number of ways to rotate between them. The idea of taking the "shortest path" is probably good, but I'd also like to know how to parameterize the remaining degrees of freedom (without resorting to Gram-Schmidt or null([x,y]^T); what the hell does that even mean? what is T? and what is a null space? etc.), and then how to do the calculations based on them.

I also heard that in higher dimensions there's enough "room" for things to rotate at two different speeds at once. It went something like this:

[ cos theta,  sin theta,  0,          0;
 -sin theta,  cos theta,  0,          0;
  0,          0,          cos theta,  sin theta;
  0,          0,         -sin theta,  cos theta ]

or something like that. I don't remember exactly and I can't find the website again.
The question is: are such rotations distorted rotations? Would it be a purer rotation, in some sense, not to rotate at two different speeds? Or is it just another necessary degree of freedom that shouldn't be selectively ignored?

Mainly, though, of course, the question is how to do all of this in plain algebraic/trigonometric terms.
Thanks.

Postby antonfire » Sun Apr 25, 2010 8:12 am UTC

This is for the most part a restatement of skeptical scientist's second post, but hopefully in "linear algebra 101" terms.


Since computing the rest of the orthonormal basis doesn't affect the final answer, there should hopefully be no need to do it at all. Here's a modified version of the process where you don't need to do it.

Start with vectors v_1 and v_2, and pick a pair of orthonormal vectors q_1, q_2 that span the same space (this is precisely what Gram-Schmidt does). If you like, this is the QR decomposition of the matrix whose columns are v_1 and v_2; q_1 and q_2 are the columns of Q.

Now, given a vector w, what do we need to do to rotate it around the "axis" perpendicular to our plane? We need to find its components in the directions of q_1 and q_2. These are, of course, the components of the vector Q^T w. Then the projection of w onto that plane is QQ^T w. The projection of w onto the "axis" is then the rest of it: w - QQ^T w = (I - QQ^T)w. This won't change when we rotate the thing.

Okay, now let's rotate. If U is the usual 2 by 2 rotation matrix by angle t, then the projection of the rotated w into our plane is QUQ^T w. Adding back in the component of w pointing perpendicular to the plane gives us that the rotated w is (I - QQ^T)w + QUQ^T w = (I - Q(I-U)Q^T)w.

At this point we can use trig to find the angle we need to rotate by, find U and be done. But wait, note that in the two dimensional case all the trig "cancels itself out" in the end. You don't actually need to evaluate any sines or cosines. Is this true in the n-dimensional case as well? Yes!

We did the QR decomposition initially. That means v_1 = r_11 q_1 and v_2 = r_12 q_1 + r_22 q_2. So we know what rotation matrix to multiply by (in q_1, q_2 coordinates) to get v_1 to map to a positive multiple of v_2. It is U = 1/sqrt(r_12^2 + r_22^2) [r_12, -r_22; r_22, r_12].

So the process is this: do the QR decomposition (Gram-Schmidt) on the two vectors you started with. Compute U = 1/sqrt(r_12^2 + r_22^2) [r_12, -r_22; r_22, r_12]. The matrix I - Q(I-U)Q^T is your rotation. That's three square roots and no trig functions.
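
Here is a minimal numpy sketch of that process (my code, not antonfire's; it does the two-vector Gram-Schmidt by hand so that r_12 and r_22 are explicit, and assumes v1 and v2 are linearly independent):

import numpy as np

def rotation_via_qr(v1, v2):
    # Gram-Schmidt on (v1, v2): v1 = r11 q1, v2 = r12 q1 + r22 q2
    q1 = v1 / np.linalg.norm(v1)
    r12 = np.dot(q1, v2)
    w = v2 - r12 * q1
    r22 = np.linalg.norm(w)
    q2 = w / r22
    Q = np.column_stack([q1, q2])     # n x 2, columns q1 and q2
    s = np.hypot(r12, r22)            # = |v2|, so cos(theta) = r12/s, sin(theta) = r22/s
    U = np.array([[r12, -r22],
                  [r22,  r12]]) / s
    n = v1.size
    return np.eye(n) - Q @ (np.eye(2) - U) @ Q.T

Applied to v1, the result should be a positive multiple of v2 (equal to v2 if both are unit length), and no trig function is ever evaluated.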


Two morals here:
(1) Don't throw away information. The R in QR decomposition turned out to be useful.
(2) Matrices don't have to be square.

Postby barnowl@silcom.com » Sat Feb 16, 2013 9:13 pm UTC

I have not carefully examined the above posts. However, it may be useful to refer to my online homogeneous matrix approach titled "Homogeneous Transformation Matrices" (HTM). This requires some simple and standard preliminaries on how to represent a point in n-dimensional space by a 1 x (n+1) row matrix, and a hyperplane (an (n-1)-dimensional flat) by an (n+1) x 1 column matrix. Higher-dimensional flats can then be represented by adding rows and forming the join, or adding columns and forming intersections.

An n-dimensional rotation is specified by: (1) a pointwise invariant (n-2)-dimensional flat and an angle of rotation about this flat, or (2) two oriented hyperplanes, h1 and h2, with the understanding that the rotation carries h1 to h2, with the positive "side" of h1 carried to the positive "side" of h2. This second method is based on the well-known fact that any rotation can be effected by two reflections in hyperplanes (which intersect in the invariant flat of the resulting rotation). For both (1) and (2), explicit methods are given in HTM to construct a homogeneous matrix which effects the desired rotation when the homogeneous coordinates of any ordinary point are right-multiplied by it. Directions ("ideal points") can also be used as parameters or operated on by the matrix.
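
As a small illustration of the two-reflection fact (my own sketch, specialized to the rotation taking a unit vector x to a unit vector y discussed earlier in the thread, not the HTM machinery): reflecting in the hyperplane normal to x+y sends x to -y, and a second reflection in the hyperplane normal to y sends -y to y, so the product of the two reflections is a rotation taking x to y that fixes the orthogonal complement of span(x, y).

import numpy as np

def reflect(normal):
    # reflection in the hyperplane through the origin with the given normal
    n_hat = normal / np.linalg.norm(normal)
    return np.eye(n_hat.size) - 2.0 * np.outer(n_hat, n_hat)

def rotation_from_two_reflections(x, y):
    # x, y: unit vectors with x != -y; a product of two reflections is a rotation
    return reflect(y) @ reflect(x + y)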

I am defining an n-dimensional rotation as an isometry with an (n-2)-dimensional pointwise invariant "axis". However, I notice that some online sources define n-dimensional rotations as isometries with an invariant POINT. It appears these two definitions are equivalent, that is, that if an isometry has an invariant point, it must have a pointwise invariant (n-2)-dimensional flat. In three dimensions, this implies the composition of two successive rotations about a point must be another rotation about that point, with some axis line. This is well known, but not so easy to visualize. Perhaps a reader can clarify this issue; it bears on the very definition of an n-dimensional rotation.

Postby f5r5e5d » Sun Feb 17, 2013 9:51 pm UTC

I like the introductory Geometric Algebra rotation examples: use an even number of reflections in a 2-plane, with rotors as normalized real Clifford multivectors having scalar and even k-blade parts, and the "sandwich" multiplication v' = R v R^{-1}.

How about Googling for cookbook approaches: http://www.cs.indiana.edu/pub/hanson/Si ... gndrot.pdf looks more matrix-oriented.