
Two matrices $M_1$ and $M_2$ are to be stored in arrays $A$ and $B$ respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute $M_1 \times M_2$ will be

1. best if $A$ is in row-major, and $B$ is in column-major order

2. best if both are in row-major order

3. best if both are in column-major order

4. independent of the storage scheme

D is correct.

The question asks for time complexity, and each access of an array element takes constant time regardless of where the element is stored.

So the time complexity does not depend on the storage scheme. If instead the program execution time had been asked,

option (a) would be true.


The complexity remains the same independent of storage scheme.

The main difference between row-major and column-major order is the memory access pattern: for example, if you iterate by column over a row-major array, you jump around memory, which is bad for CPUs because they cannot read ahead and cache effectively.

This is because transferring data from RAM to the CPU is very slow relative to the speed of modern processors. Thus most of the time would be spent waiting for the memory to arrive.

To mitigate this, memory accesses are answered in bunches, with extra adjacent data brought into the CPU's caches. This makes it important to use "good" memory access patterns, so that the machine can predict what you will use next and cache it, eliminating the transfer wait times as much as possible. But if you jump around memory instead of going in order, the machine may have to wait on each access.

The wait time is bounded by a constant, however, so the complexity remains the same. And of course, doing it in the correct order does not decrease the complexity either.
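The access-pattern difference described above can be made concrete with a small sketch (a hypothetical $4 \times 4$ example, not taken from the question): a matrix stored as one flat row-major buffer, where element $(i, j)$ lives at index $i \cdot n + j$.

```python
# Hypothetical 4x4 example: a matrix stored as a flat row-major buffer,
# the way C lays out 2-D arrays. Element (i, j) sits at index i*n + j.
n = 4

# Visiting elements row by row yields consecutive indices (stride 1),
# the cache-friendly pattern described above.
row_order = [i * n + j for i in range(n) for j in range(n)]

# Visiting column by column jumps by n between accesses (stride n),
# the "jumping around memory" pattern.
col_order = [i * n + j for j in range(n) for i in range(n)]

print(row_order[:5])  # stride-1: [0, 1, 2, 3, 4]
print(col_order[:5])  # stride-4: [0, 4, 8, 12, 1]
```

Either traversal touches all $n^2$ elements, so the asymptotic work is identical; only the stride (and hence the cache behaviour) differs.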

https://cs.stackexchange.com/questions/17865/does-the-performance-of-matrix-multiplication-depend-on-the-storage-of-the-array

Program execution time is nothing but time complexity. Am I wrong?
Time complexity is not the exact program execution time; it is an estimate that is independent of many things: the machine, the processor's speed, and so on.

Similarly, we assume that access to any element takes constant ($O(1)$) time, but internally this will differ from machine to machine.

The time complexity of an algorithm is independent of the storage scheme: while computing it, we assume that accessing any element takes the same constant time. So the answer is D.

But if the question had asked which implementation (not algorithm) gives the best execution time, then option (a) would be correct.

What is the official key answer for this question?
But the time complexity of an algorithm depends on hardware resources, right? For example, multiplication of two $n$-bit numbers takes $O(n)$ time.
@reena, while measuring the time complexity of an algorithm we do asymptotic analysis, where we do not care about the hardware on which the program runs.

But obviously the hardware affects the overall time taken to execute a program.

Answer: (a)

Matrix multiplication reads the rows of $M_1$ and the columns of $M_2$ in order. So, if array $A$ is stored in row-major order and array $B$ in column-major order, then when the first element is accessed, the neighbouring elements (which will be needed immediately) are also brought into the cache. This storage scheme is therefore best for matrix multiplication in terms of execution time.


The question asks about "time complexity". In any case the TC would be $O(n^3)$. Had it asked about execution time in terms of memory access time, option (a) would be appropriate.

I think you have given the answer for program execution time, whereas the question asks about time complexity.
An $O(n^3)$ algorithm will still be an $O(n^3)$ algorithm (assuming you use the straightforward matrix multiply). However, by taking the memory hierarchy of the system into account, you can significantly speed up that $O(n^3)$ algorithm.
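One standard way to exploit the memory hierarchy without changing the asymptotic cost is loop tiling (cache blocking). A minimal sketch over flat row-major lists follows; the function name and tile size are illustrative, not from the thread:

```python
def matmul_tiled(a, b, n, tile=2):
    """Naive O(n^3) multiply over flat row-major lists (element (i, j) at
    index i*n + j), reordered into tile x tile blocks so each block is
    reused while it is still likely to be in cache. The arithmetic is
    identical to the plain triple loop; only the loop order changes."""
    c = [0] * (n * n)
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                # Multiply one tile of a by one tile of b into a tile of c.
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        s = c[i * n + j]
                        for k in range(kk, min(kk + tile, n)):
                            s += a[i * n + k] * b[k * n + j]
                        c[i * n + j] = s
    return c

# Multiplying by the identity leaves the matrix unchanged.
identity = [1, 0, 0, 0, 1, 0, 0, 0, 1]
print(matmul_tiled(identity, list(range(9)), 3))  # [0, 1, 2, ..., 8]
```

On real hardware the speedup comes from choosing `tile` so the working set fits in cache; in pure Python the reordering has no such effect, so this is only a sketch of the idea.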

The question asks about time complexity, not the time taken by the program. For time complexity, it does not matter how we store the array elements or which data structure we use: we always access the same number of elements of $M_1$ and $M_2$ to multiply the matrices. Accessing an element always takes constant ($O(1)$) time, so the constants may differ between storage schemes, but the time complexity does not.
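This point can be illustrated with a minimal sketch (hypothetical helper names, flat-list storage) of the same naive $O(n^3)$ multiply with $B$ stored row-major versus column-major: both versions perform exactly $n^3$ multiplications and return the same result; only the access stride into $B$'s storage changes.

```python
def matmul_b_row_major(a, b, n):
    # b is flat row-major: b[k][j] is b[k*n + j] (stride-n walk down column j).
    c = [0] * (n * n)
    for i in range(n):
        for j in range(n):
            c[i * n + j] = sum(a[i * n + k] * b[k * n + j] for k in range(n))
    return c

def matmul_b_col_major(a, bt, n):
    # bt is flat column-major: b[k][j] is bt[j*n + k] (stride-1, cache friendly).
    c = [0] * (n * n)
    for i in range(n):
        for j in range(n):
            c[i * n + j] = sum(a[i * n + k] * bt[j * n + k] for k in range(n))
    return c

a = [1, 2, 3, 4]    # [[1, 2], [3, 4]] row-major
b = [5, 6, 7, 8]    # [[5, 6], [7, 8]] row-major
bt = [5, 7, 6, 8]   # the same matrix, column-major
print(matmul_b_row_major(a, b, 2))   # [19, 22, 43, 50]
print(matmul_b_col_major(a, bt, 2))  # [19, 22, 43, 50]
```

Both loops execute the same number of operations, so the asymptotic complexity is identical; on real hardware the column-major version of $B$ would simply have the friendlier cache behaviour.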
