Matrix inversion and eigenvalue time depends on row ordering?


electricpete (Electrical)
In the book "Rotating Equipment Vibration" by Maurice Adams, he gives a very straightforward algorithm for generating the M, K, and C matrices for a rotor (each rotor station gets 4 variables), up to the point where he adds a mass below a bearing support. To handle that, he inserts rows into the middle of the matrix, which significantly complicates the relationship between the indices and the physical problem, but it preserves the banded pattern: the nonzero elements stay concentrated on and immediately adjacent to the diagonal. From the discussion I gather he did this intentionally because he believes it helps the efficiency of the computer eigenvalue algorithm, although he doesn't mention Matlab.

Does anyone know if this is true for Matlab? For example, is it easier/faster to compute the eigenvalues of a matrix with the nonzero values concentrated along the diagonal:
X X 0 0 0 0
0 X X 0 0 0
0 0 X X 0 0
0 0 0 X X 0
0 0 0 0 X X

... (is it easier/faster) ... than for the same matrix with the rows reordered (for more logical indexing), such as:
X X 0 0 0 0
0 0 X X 0 0
0 0 0 X X 0
0 0 0 0 X X
0 X X 0 0 0

Note the actual matrix could be somewhere in the range of 50x50 to 100x100.
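
In Matlab terms, I'm essentially asking whether a timing test along these lines would show any difference (just a sketch; the size and fill pattern are made up, not my actual rotor matrices):

n = 100;                                                            % ballpark size mentioned above
A = diag(rand(n,1)) + diag(rand(n-1,1),1) + diag(rand(n-1,1),-1);   % nonzeros hug the diagonal
p = randperm(n);                                                    % arbitrary reordering of the unknowns
B = A(p,p);                                                         % same problem, rows/columns shuffled
tic; eig(A); tBanded   = toc;
tic; eig(B); tShuffled = toc;
[tBanded tShuffled]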

=====================================
Eng-tips forums: The best place on the web for engineering discussions.
 
It's probably got more to do with storage efficiency than with the number of computations as such, but the first step in solving FEA models used to be to reorder the rows to get a near-diagonal matrix; in fact, that was my first ever piece of production code.

My guess is that people have put a lot of effort over the years into optimising the code for near-diagonal matrices, so even if there is no intrinsic advantage, there is still a practical one.
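
If you want to see what that reordering step does, Matlab's symrcm (reverse Cuthill-McKee) returns a bandwidth-reducing permutation. A rough sketch, using the built-in bucky matrix as a stand-in since I don't have your matrices (this is not the code I wrote back then):

S = bucky;                       % built-in 60x60 sparse adjacency matrix, used here as a stand-in
r = symrcm(S);                   % reverse Cuthill-McKee ordering
subplot(1,2,1), spy(S),      title('original ordering')
subplot(1,2,2), spy(S(r,r)), title('after symrcm')

Plotted side by side, the nonzeros get pulled in toward the diagonal.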

Cheers

Greg Locock

SIG: please see FAQ731-376 for tips on how to make the best use of Eng-Tips.
 
Oh, I was wrong: the total CPU time was related to the RMS of the average bandwidth, so there is a computational advantage in using a near-diagonal matrix.

OTOH, CPU time is cheap these days and it sounds like your matrix is small; if a non-diagonal matrix is easier to debug, I'd use that.

Cheers

Greg Locock

SIG: please see FAQ731-376 for tips on how to make the best use of Eng-Tips.
 
Thanks. I guess he wasn't off his rocker after all.

As you say, in this case I might as well program it the way that seems most straightforward, without worrying about efficiency.

btw, my example with X's and 0's tried to keep it simple but wasn't completely right... the extra pieces are inserted as both rows and columns in the middle of the matrix. But I think you got the idea anyway.
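
One reassuring detail: since the extra pieces go in as matching rows and columns, the reordering is a similarity transform, so it can only affect speed, not the eigenvalues themselves. A quick sanity check with a made-up matrix:

A = rand(6); A = A + A';             % made-up symmetric test matrix
p = [1 3 4 5 6 2];                   % move the old row/column 2 to the end
B = A(p,p);                          % permute rows and columns together
norm(sort(eig(A)) - sort(eig(B)))    % essentially zero: same eigenvalues either way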

=====================================
Eng-tips forums: The best place on the web for engineering discussions.
 
Try posting this question on the com.soft-sys.matlab USENET newsgroup. There are some seriously good linear algebra gurus there, including the chap who wrote the sparse matrix solvers.

- Steve
 