Vectors and Matrix Determinants

pogrmman
Posts: 329
Joined: Wed Jun 29, 2016 10:53 pm UTC
Location: Probably outside

Vectors and Matrix Determinants

Postby pogrmman » Tue Sep 27, 2016 10:45 pm UTC

So, matrices have always kind of puzzled me, but what puzzles me more has to do with the determinant of a matrix of vectors.

If you have a matrix composed of n-1 n-dimensional vectors and a row of the form [i,j,k,...] (i.e. the unit vectors along each of the axes), the determinant is a vector perpendicular to the ones you started with. (So, if you have the vector <1,2> and put it in a matrix with [i,j] as the other row, you'll get the vector 2i-j or <2,-1>, which is perpendicular to <1,2>. You could also get j-2i or <-2,1> which is also perpendicular.)

You've probably seen this in calculating the cross product. What I want to know is why this works. (It seems to work for n at least up to 6 -- I couldn't be bothered to take the determinant of a 7x7 matrix, but my guess is it will work)

Do any of you have a way to explain why this works?

Thanks!

EDIT: The position of the row [i,j,k,...] might matter -- I'm not sure -- but I'm not about to calculate the determinant of a 3x3 matrix without scratch paper... (It appears as if it works if this row is on top or on the bottom -- and 3x3 is the minimum to test somewhere that isn't the top or bottom...)
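For anyone who wants to check this numerically instead of by hand, here is a rough sketch (Python with numpy; the function name is my own invention) that implements the construction by expanding the determinant along the symbolic [i,j,k,...] row and then verifies perpendicularity with dot products:

```python
# Expanding the determinant along the symbolic row: the component for column i
# is (-1)^i (counting columns from 0) times the determinant of the (n-1)x(n-1)
# matrix obtained by deleting column i from the stack of vectors.
import numpy as np

def generalized_cross(vectors):
    """Given n-1 vectors of dimension n (as rows), return the vector produced by
    the determinant-with-a-symbolic-row construction."""
    vectors = np.asarray(vectors, dtype=float)
    n = vectors.shape[1]
    assert vectors.shape == (n - 1, n)
    result = np.empty(n)
    for i in range(n):
        minor = np.delete(vectors, i, axis=1)   # delete column i
        result[i] = (-1) ** i * np.linalg.det(minor)
    return result

# 2-D example from the post: <1,2> gives <2,-1> (or its negative, depending on row order)
print(generalized_cross([[1, 2]]))            # -> [ 2. -1.]

# Random check in dimension 7 (the case not worth doing by hand)
rng = np.random.default_rng(0)
V = rng.standard_normal((6, 7))
p = generalized_cross(V)
print(np.allclose(V @ p, 0))                  # -> True: perpendicular to all six vectors
```

The second print should come out True (up to floating-point error) for any choice of the six vectors, not just this random one.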

cyanyoshi
Posts: 362
Joined: Thu Sep 23, 2010 3:30 am UTC

Re: Vectors and Matrix Determinants

Postby cyanyoshi » Wed Sep 28, 2016 1:26 am UTC

Let's try to find an n×1 vector v_n so that v_1^T v_n = ... = v_{n-1}^T v_n = 0. I define V = [v_1 v_2 ... v_{n-1}] so that we have V^T v_n = [0 ... 0]^T. Now let's make up a vector w and a matrix X so that [V w] and [X v_n] are square, and [V w]^T [X v_n] = I_n, where I_n is the n×n identity matrix. Expanding this out gives the system of equations V^T X = I_{n-1}, V^T v_n = 0, w^T X = 0, w^T v_n = 1. So far so good.
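To see that block expansion concretely, here is a small numerical sanity check (assuming numpy; the matrices are just random placeholders):

```python
# For any V (n x (n-1)), w, v_n (n x 1), and X (n x (n-1)), the product
# [V w]^T [X v_n] has blocks V^T X, V^T v_n, w^T X, and w^T v_n, so setting it
# equal to I_n gives exactly the four equations above.
import numpy as np

rng = np.random.default_rng(3)
n = 4
V, X = rng.standard_normal((n, n - 1)), rng.standard_normal((n, n - 1))
w, vn = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))

product = np.hstack([V, w]).T @ np.hstack([X, vn])
blocks = np.block([[V.T @ X, V.T @ vn],
                   [w.T @ X, w.T @ vn]])
print(np.allclose(product, blocks))   # -> True
```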

Now we can just worry about finding a matrix inverse, since we would have [X v_n] = [V w]^{-T}. We can get a v_n that works by looking at the last column of [V w]^{-T}. It turns out there is a nifty way to calculate the inverse of a matrix using cofactors. What happens is that the i-th element of v_n is some constant times (-1)^i times the determinant of (V with the i-th row deleted). This is the same as the determinant of the matrix you described, times a constant!
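If you want to verify that cofactor fact numerically, here is a minimal sketch (assuming numpy; the helper name is mine) of the identity inv(A)^T = C / det(A), where C is the cofactor matrix:

```python
# For a square matrix A, inv(A).T = C / det(A), where
# C[i, j] = (-1)^(i+j) * det(A with row i and column j deleted).
# Taking A = [V w] and looking at the last column of C: deleting the last
# column removes w, so C[i, n-1] = +/- det(V with row i deleted).
import numpy as np

def cofactor_matrix(A):
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
C = cofactor_matrix(A)
print(np.allclose(np.linalg.inv(A).T, C / np.linalg.det(A)))   # -> True
```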

Therefore the vector constructed by your method must be orthogonal to all of the original vectors: pick any vector w that is linearly independent from {v_1, ..., v_{n-1}}, and the last column of [V w]^{-T} is orthogonal to v_1, ..., v_{n-1} by construction. By the cofactor formula above, that column is a nonzero constant times the vector your determinant trick produces, so your vector is orthogonal too.
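Tying it together, here is a short numerical check (again assuming numpy; the variable names are mine) that the last column of [V w]^{-T} is orthogonal to every v_i and parallel to the vector from the determinant construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
V = rng.standard_normal((n, n - 1))              # columns are v_1, ..., v_{n-1}
w = rng.standard_normal((n, 1))                  # a random w is independent almost surely

vn = np.linalg.inv(np.hstack([V, w])).T[:, -1]   # last column of [V w]^{-T}

# The construction from the original post: i-th entry is
# (-1)^i * det(V with row i deleted), up to an overall sign.
det_vec = np.array([(-1) ** i * np.linalg.det(np.delete(V, i, axis=0))
                    for i in range(n)])

print(np.allclose(V.T @ vn, 0))                          # -> True: orthogonal to every v_i
ratio = vn[0] / det_vec[0]                               # det_vec[0] is nonzero for generic data
print(np.allclose(vn, ratio * det_vec))                  # -> True: the two vectors are parallel
```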

Demki
Posts: 193
Joined: Fri Nov 30, 2012 9:29 pm UTC

Re: Vectors and Matrix Determinants

Postby Demki » Wed Sep 28, 2016 7:11 am UTC

In addition to cyanyoshi's post, I think these two videos might help you, at least for the case of 3D vectors:
https://youtu.be/eu6i7WJeinw
https://youtu.be/BaM7OCEm3G0

These 2 videos are part of a series about linear algebra.

pogrmman
Posts: 329
Joined: Wed Jun 29, 2016 10:53 pm UTC
Location: Probably outside

Re: Vectors and Matrix Determinants

Postby pogrmman » Wed Sep 28, 2016 9:33 pm UTC

Thanks!

