Determining Big O() runtime on execution of a matrix
Moderators: phlip, Moderators General, Prelates
So I've searched forums and Big O notation references online and haven't found a clear answer to this. I've debated this question with friends in computer science and software engineering, and while I feel I'm right about the runtime, I'm not certain. The question arose from an interview I had a week ago, where I had a different (though related) disagreement over the runtime of my algorithm.
Also, if this particular question has appeared somewhere and this thread is unnecessary, please link me because I've searched and haven't found anything.
Let's assume we have an m*m graph. In the example below, it's a 4x4 matrix, so there are 16 cells.
Code: Select all
+---+---+---+---+
|   |   |   |   |
+---+---+---+---+
|   |   |   |   |
+---+---+---+---+
|   |   |   |   |
+---+---+---+---+
|   |   |   |   |
+---+---+---+---+
m * m where m = 4, so 16 cells
Now let's assume we have some algorithm operating as shown below. Assume that performOperationOn(matrix[i][j]); is O(1).
Code: Select all
for (i = 0; i < m; i++)
    for (j = 0; j < m; j++)
        performOperationOn(matrix[i][j]);
The debate is whether the runtime of this algorithm is O(n) or O(n^2).
Argument for O(n^2)
You loop over m, then for every loop iteration you have an inner loop that also runs over m. This is m^2, or O(n^2).
Argument for O(n)
While you do have a double loop, after the algorithm executes fully, every item of the input has been operated on exactly once. Since the input size n is what's used to determine runtime (as opposed to m, one dimension of the input matrix), this is O(n). Note that since the input is defined to be a perfect square, the input size does grow as m^2 (quadratically in m), but the runtime still increases linearly with input size.
So which argument is correct? Or which answer is correct with a faulty argument, if that's the case?
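For what it's worth, the calls can simply be counted. Here is a minimal C sketch of the loop from the question; the call counter and the `count_operations` helper are my additions, not part of the original post:

```c
/* Stand-in for the O(1) operation from the question. */
static long calls = 0;

static void performOperationOn(int cell) {
    (void)cell;   /* the actual work doesn't matter, only the count */
    calls++;
}

/* Runs the nested loop from the question and returns how many
 * times performOperationOn was invoked. */
long count_operations(int m) {
    calls = 0;
    for (int i = 0; i < m; i++)
        for (int j = 0; j < m; j++)
            performOperationOn(i * m + j);
    return calls;
}
```

For m = 4 this performs exactly 16 calls: m^2 in terms of the side length, but exactly n in terms of the cell count n = m*m.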
 phlip
 Restorer of Worlds
 Posts: 7572
 Joined: Sat Sep 23, 2006 3:56 am UTC
 Location: Australia
 Contact:
Re: Determining Big O() runtime on execution of a matrix
It depends on what you call "n". It's O(n^{2}) in the width or height of the matrix, but O(n) in the number of elements in the matrix. Since the number of elements in the matrix is the square of the width or height, it should be clear that both of these are the same.
Usually, the complexity is given in terms of "the size of the input"... which in this case, would be the number of values in the matrix. So you'd call it O(n) in terms of the size of the input.
Code: Select all
enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵╰(ಠ_ಠ ⚠) {exit((int)⚠);}
Re: Determining Big O() runtime on execution of a matrix
For one thing, Big O notation can mislead, so it isn't the final answer.
My 2 cents on this matter: it's O(n), since this is a matrix. Yes, it is a double loop, but that's just part of how the matrix is logically set up for human reading. If you set up your private data as m_data[n*m], you can do it with a single loop.
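To illustrate the flat-storage idea, here is a sketch in C; the `traverse_flat` name and the step counter are mine, and note that the single loop still executes m*m iterations:

```c
/* Visit every cell of an m*m matrix stored as one flat array.
 * One loop instead of two, but still exactly m*m iterations. */
long traverse_flat(const int *m_data, int m) {
    long steps = 0;
    for (long k = 0; k < (long)m * m; k++) {
        (void)m_data[k];   /* performOperationOn would act on m_data[k] here */
        steps++;
    }
    return steps;
}
```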
Yakk wrote:Computer Science is to Programming as Materials Physics is to Structural Engineering.

 Posts: 167
 Joined: Tue Jan 20, 2009 12:02 pm UTC
 Location: trapped in a profile factory please send help
Re: Determining Big O() runtime on execution of a matrix
_Axle_ wrote:My 2 cents on this matter is that its a O(n), since this is a matrix. Yes, it is a double loop, but that is part of how the matrix logically setup for human reading. If you set up your private data to be m_data[n*m] , you can just do a single loop.
It doesn't matter whether there's a single loop or a double loop, a 1D array or a 2D array. Either way it takes n*m steps to visit every element. Think about it.
Re: Determining Big O() runtime on execution of a matrix
phlip wrote:It depends on what you call "n". It's O(n^{2}) in the width or height of the matrix, but O(n) in the number of elements in the matrix. Since the number of elements in the matrix is the square of the width or height, it should be clear that both of these are the same.
Usually, the complexity is given in terms of "the size of the input"... which in this case, would be the number of values in the matrix. So you'd call it O(n) in terms of the size of the input.
This.
"There are only two hard problems in computer science: cache coherence, naming things, and offbyone errors." (Phil Karlton and Leon Bambrick)
coding and xkcd combined
(Julian/Julian's)
Re: Determining Big O() runtime on execution of a matrix
Dolus wrote:Let's assume we have an m*m graph. In the example below, it's a 4x4 matrix, so there are 16 cells.
...
m * m where m = 4, so 16 cells
Clearly, the runtime is O(m*m), because m is the problem size you were given. It is correct to say it is O(n) with respect to the elements of the matrix: every element is visited once. But what is n? You say it comes from an m*m graph.
Imagine I have an O(2^m) algorithm and then define n = 2^m... is it really O(n) then?
In the usual case, where you have a graph with N nodes and build a matrix from it, the double loop gives you a runtime of O(N^2). Nothing to argue about.
If you somehow have a special case with M matrix elements, your matrix size would be sqrt(M) * sqrt(M). Then O(N^2) = O(sqrt(M)^2) = O(M), which is like rearranging the matrix into a linear array.

 Posts: 32
 Joined: Mon Oct 01, 2007 4:17 am UTC
Re: Determining Big O() runtime on execution of a matrix
For matrix algorithms, the size of the input is typically characterized by the maximum of the width and height, rather than the total number of elements.
Re: Determining Big O() runtime on execution of a matrix
skelterjohn wrote:For matrix algorithms, the way the size of the input is characterized is typically the maximum of width and height, rather than the total number of elements.
This is totally correct. The poster's question was about the Big O() notation for this case, when using a nested for loop over all the elements.
Ask your teachers/professors. If any of them doesn't say N^2, revoke their license!
 Meteorswarm
 Posts: 979
 Joined: Sun Dec 27, 2009 12:28 am UTC
 Location: Ithaca, NY
Re: Determining Big O() runtime on execution of a matrix
k2daevin wrote:skelterjohn wrote:For matrix algorithms, the way the size of the input is characterized is typically the maximum of width and height, rather than the total number of elements.
this is totally correct. the question of the poster was the Big O() notation of this case when using a nested for loop on all the elements.
It's not totally correct. Some algorithms are characterized as functions of both width and height, since matrices are not always square, i.e., describing an m×n matrix. For example, reducing an m×n matrix to bidiagonal form where m > n is O(mn^{2}). While this is O(n^{3}), describing it as a function of both m and n is more descriptive.
The same as the old Meteorswarm, now with fewer posts!
Re: Determining Big O() runtime on execution of a matrix
k2daevin wrote:skelterjohn wrote:For matrix algorithms, the way the size of the input is characterized is typically the maximum of width and height, rather than the total number of elements.
this is totally correct. the question of the poster was the Big O() notation of this case when using a nested for loop on all the elements.
ask your teachers/professors. if any of them doesn't say N^2, then revoke their license!
It would be O(n^2) only if each of the nested for loops ran across N, where N = m^2 is the number of items in the matrix. In this case, the loops work together to traverse the entire matrix, operating on each item only once. In English this would be: "for each item of N, perform this operation N times" is O(n^2), as opposed to "for each item of N, perform this operation once", which is O(n).
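The two English sentences above can be written out directly as loops. A sketch, with helper names of my own choosing:

```c
/* "For each item, perform this operation N times": O(N^2) in N. */
long quadratic_work(long n) {
    long ops = 0;
    for (long i = 0; i < n; i++)
        for (long j = 0; j < n; j++)
            ops++;
    return ops;
}

/* "For each item, perform this operation once": O(N) in N.
 * The nested matrix loop is this case when N = m*m, since each
 * of the N cells is touched exactly once. */
long linear_work(long n) {
    long ops = 0;
    for (long i = 0; i < n; i++)
        ops++;
    return ops;
}
```

For N = 16 (the 4x4 example), the first does 256 operations and the second does 16.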
OP answered his own question here: "the runtime still increases linearly with the increase of input size."
Linear means O(n).
Re: Determining Big O() runtime on execution of a matrix
Dolus wrote:the input size does increase as m^2 (grows quadratically? Is that the right word?)
Actually, it grows linearly.
SippyCup wrote:OP answered his own question here: "the runtime still increases linearly with the increase of input size."
Linear means O(n).
Linear means O(n), but increasing linearly means O(n^2). The rate of increase refers to the derivative, not the function itself. I think the confusion stems from the fact that exponential functions are said to "increase exponentially". This doesn't refer to the fact that they are exponential functions, it refers to the fact that the derivative of an exponential function is an exponential function.
Consider this counterexample to the argument that "increasing in x fashion == x function": does a constant function increase constantly? Nope, it doesn't increase at all, which is exactly what its derivative tells you.
But I digress.
The size of the input to a matrix algorithm is not defined as the number of elements in the matrix, because if a matrix has x elements, a matrix containing (x + 1) elements is not necessarily a valid input. The data size is instead defined by the dimensions of the matrix. Hence the time complexity of a matrix algorithm must also be defined in terms of the dimensions of the matrix. If you want to visit every element of the matrix, you have to iterate over each row for each column. For an (m * n) matrix, that's O(mn).
There are some operations you can perform on a matrix which don't take quadratic time: for example, elementary row operations. Swapping two rows is a constant-time operation; you just reassign the pointers. Multiplying a row by a constant, on the other hand, takes time linear in the length of the row.
If visiting every element of a matrix is linear because it only visits each (i, j) entry once, what does that imply about row multiplication? If you define the data size, x, as the number of elements in the matrix, then in the case of a square matrix you are claiming that row multiplication is order root x. But row multiplication is equivalent to manipulating every element of an array which does not represent a matrix, and that's clearly linear in the size of the array.
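A sketch of those row operations, assuming a representation (my assumption, not from the post) in which the matrix is stored as an array of row pointers:

```c
/* Swap two rows of a matrix stored as an array of row pointers:
 * constant time, no elements are touched. */
void swap_rows(int **rows, int a, int b) {
    int *tmp = rows[a];
    rows[a] = rows[b];
    rows[b] = tmp;
}

/* Multiply one row by a constant: linear in the row length
 * (i.e., the number of columns). */
void scale_row(int *row, int ncols, int c) {
    for (int j = 0; j < ncols; j++)
        row[j] *= c;
}
```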
Re: Determining Big O() runtime on execution of a matrix
You have to define your model of computation for each problem, so that you can state the complexity with respect to how the problem fits in that model.
Re: Determining Big O() runtime on execution of a matrix
Sana wrote:Linear means O(n), but increasing linearly means O(n^2). The rate of increase refers to the derivative, not the function itself.
I think this is an issue of semantics, but I take your meaning. The value of the runtime is a linear function of the input size: it increases along a line. I did not mean to imply anything about the rate of change.
Re: Determining Big O() runtime on execution of a matrix
If the operation is O(1) and the algorithm is as given, the order is O(m^2). It's pointless to argue about Big O notation without saying what the variables represent. If n = m, then it's O(n^2). If n = the number of items in the matrix, then it's O(n).
 Cleverbeans
 Posts: 1378
 Joined: Wed Mar 26, 2008 1:16 pm UTC
Re: Determining Big O() runtime on execution of a matrix
I think the only issue here is the use of the term "matrix" to describe the data structure, and the fact that the data structure only comes in certain sizes. If we called this an array, or a tree, then it seems clear that it's O(n). The mistake in taking the number of rows as the size of the input is that the definition of asymptotic equivalence only allows the comparison of functions over the same domain.
Let's try to represent the problem using rows as inputs. Clearly we're going to traverse each row and return a new row that has the operation performed on it. Calling our row r and viewing the transformation on the rows as a function f(r): R^m -> R^m, the complexity of the problem is O(m*f(r)). This is where the trouble is lurking, because if we naively attempt to compare O((m+1)*f(r)) and O(m*f(r)), we've tacitly done something undefined by comparing functions over different domains. To make the comparison meaningful you must found the argument on some function that accepts only fixed-size input. The only function provided that fits this description in the original problem is the performOperationOn function. I expect the confusion stems from the overloading of n in describing both the dimensions of a square matrix and the computational complexity of algorithms.
"Labor is prior to, and independent of, capital. Capital is only the fruit of labor, and could never have existed if labor had not first existed. Labor is the superior of capital, and deserves much the higher consideration."  Abraham Lincoln