+11 votes
1.1k views

A cache-aware sorting algorithm sorts an array of size $2^k$ with each key of size 4 Bytes. The size of the cache memory is 128 Bytes, and the algorithm is a combination of merge sort and insertion sort to exploit the locality of reference for the cache memory (i.e. it will use insertion sort when the problem size equals the cache memory size).

The best-case and worst-case running times of the algorithm, respectively, are:

A) $2^{k-5}\left[1+\log_2 2^{k-5}\right]$, $2^{k-5}\left[2^5+\log_2 2^{k-5}\right]$

B) $2^{k-5}\left[1+\log_2 2^{k-5}\right]$, $2^{k}\left[2^{k-5}+\log_2 2^{k}\right]$

C) $2^{k}\left[1+\log_2 2^{k-5}\right]$, $2^{k}\left[2^5+\log_2 2^{k-5}\right]$

D) $2^{k}\left[2^5+\log_2 2^{k-5}\right]$, $2^{k}\left[2^5+\log_2 2^{k-5}\right]$

in Algorithms by Loyal (9.3k points)
–1
Can anyone please explain the question? What I'm getting is:

"If each word is 4-byte long, then a 128-byte cache contains 32 words. Therefore, if the problem size is 32 or less, the instance will be immediately solved by Insertion Sort, otherwise it will be split in two sub-problems with the same size, which will then have to be merged. "

But how do I make a recurrence equation from this?
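A minimal sketch of the recurrence that description suggests, assuming the base case is a block of $2^5 = 32$ keys (the number of 4-Byte keys that fit in the 128-Byte cache):

$$T(n)=\begin{cases} IN(32) & \text{if } n \leq 32\\ 2\,T\!\left(\frac{n}{2}\right)+\Theta(n) & \text{if } n > 32 \end{cases}$$

where $IN(32)$ is the cost of insertion sort on one block: $\Theta(32)$ in the best case and $\Theta(32^2)$ in the worst case. Unrolling the recursion down to the 32-element blocks gives the $\frac{n}{32}\cdot IN(32) + n\log\left(\frac{n}{32}\right)$ form used in the answer below.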

1 Answer

+15 votes
Best answer
  • If we sort $\frac{n}{x}$ lists, each of $x$ elements, using insertion sort and then merge them using normal merge sort, the total overall complexity would be $\Theta\left \{ \frac{n}{x}.IN(x) + n.\log \left ( \frac{n}{x} \right ) \right \}$, where $IN(x)$ is the time taken by insertion sort to sort an $x$-element list.
  • Here $x = 2^5 = \text{cache size in elements}$ (a 128-Byte cache holds $128/4 = 32 = 2^5$ keys).
  • So, putting the insertion sort worst case $IN(x) = x^2$, we get the worst case of the new algorithm.
  • Putting the insertion sort best case $IN(x) = x$, we get the best case for this new algorithm.

$$\begin{align*} &\text{Best complexity when insertion sort gives best}\\ &\Rightarrow IN(x) = x = 2^5 \\ &\Rightarrow \text{Best Complexity} = \frac{n}{x}.IN(x)+ n.\log \left ( \frac{n}{x} \right ) \\ &\Rightarrow \text{Best Complexity} = \frac{2^k}{2^5}.2^5 + 2^k.\log \left ( \frac{2^k}{2^5} \right ) \\ &\Rightarrow \text{Best Complexity} = 2^k. \left ( 1 + \log \left ( 2^{k-5} \right ) \right ) \\ \\ &\text{Now , } \\ &\text{Worst complexity when insertion sort gives worst}\\ &\Rightarrow IN(x) = x^2 = \left ( 2^5 \right )^2 \\ &\Rightarrow \text{Worst Complexity} = \frac{n}{x}.IN(x)+ n.\log \left ( \frac{n}{x} \right ) \\ &\Rightarrow \text{Worst Complexity} = \frac{2^k}{2^5}.\left ( 2^5 \right )^2 + 2^k.\log \left ( \frac{2^k}{2^5} \right ) \\ &\Rightarrow \text{Worst Complexity} = 2^k. \left ( 2^5 + \log \left ( 2^{k-5} \right ) \right ) \\ \end{align*}$$
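For concreteness, a minimal sketch of such a hybrid sort in Python (not taken from the question or the answer; the block size of 32 elements and the names `BLOCK`, `insertion_sort`, `merge`, `cache_aware_sort` are my assumptions for illustration):

```python
BLOCK = 32  # 128-Byte cache / 4-Byte keys = 2^5 elements per block (from the question)

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place; the block is small enough to stay in cache."""
    for i in range(lo + 1, hi):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge(a, lo, mid, hi):
    """Standard merge of the two sorted runs a[lo:mid] and a[mid:hi]."""
    left, right = a[lo:mid], a[mid:hi]
    i = j = 0
    for k in range(lo, hi):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

def cache_aware_sort(a, lo=0, hi=None):
    """Recurse as in merge sort, but hand cache-sized blocks to insertion sort."""
    if hi is None:
        hi = len(a)
    if hi - lo <= BLOCK:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    cache_aware_sort(a, lo, mid)
    cache_aware_sort(a, mid, hi)
    merge(a, lo, mid, hi)
```

The $\frac{n}{x}$ insertion-sort calls are the base cases here, and the $n.\log \left ( \frac{n}{x} \right )$ term counts the merge levels above them.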


NOTE:

  • Merge sort, in all cases, will take (total_no_of_elements) $\times \log$ (no_of_small_problems).
by Veteran (57k points)
0
  • Here $x = 2^5 = \text{cache size}$.

Cache size is given as $2^7$ B. Then why this?

0
In terms of the number of elements.
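Spelling out that conversion, using the 4-Byte key size from the question:

$$x = \frac{\text{cache size}}{\text{key size}} = \frac{128\text{ B}}{4\text{ B}} = \frac{2^7}{2^2} = 2^5 = 32 \text{ elements}$$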
0
How would it change if it was QuickSort + Insertion sort?
0
you can do that experiment :)
0

From the question and solution, as much as I can understand, we divide the array into smaller subarrays of 128 B each, then apply insertion sort on them and merge them. From the next level onwards, we apply merge sort till we get the array of size $2^k$. Am I right?

+2
Yes. Please refer to the CLRS solution manual; they have shown more results there. It is a question from Chapter 2.
0
Thanks a lot!!
0
Can you please share the CLRS solution manual link?
+1
wow deka!
0

Could not understand the question itself!!

We are given an array (say $A$), or rather a second-level memory, whose size is $2^{k}$ (bits / bytes? I think bits) [this size includes key and data ($x$); the key size is given as 4 B $\Rightarrow x \times 4\,\text{B} = 2^{k}$]. The idea behind this algorithm is to reduce the movement of pages in and out of the cache memory.

  • The algorithm stops partitioning a subarray when its size equals the cache memory size (128 B).
    This partitioning is done using merge sort.
  • We then sort the subarray using insertion sort (but why?)

I understood the question in this way. But when I read the solution, I was confused by the $\frac{n}{x}$ lists of $x$ elements being sorted using insertion sort first.

Please explain a bit how you are getting $T(n) = O\left \{ \left(\frac{n}{x}\right) \textbf{IN}(x)+n \log\left(\frac{n}{x}\right) \right \}$.

0

@Anjana

Insertion sort takes $\Theta(k^2)$ time per $k$-element list in the worst case. Therefore, sorting $n/k$ lists of $k$ elements each takes $\Theta(k^2 \cdot n/k) = \Theta(nk)$ worst-case time.

To achieve $\Theta(n \lg (n/k))$-time merging, we merge the lists pairwise, then merge the resulting lists pairwise, and so on, until there's just one list.

Now, as the algorithm is a combination of merge sort and insertion sort, the complexity is just the sum of the two complexities, right?
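To make the merging part of that argument explicit: each pairwise-merge level touches all $n$ elements once, and the number of runs halves at every level, so

$$\frac{n}{k} \to \frac{n}{2k} \to \frac{n}{4k} \to \dots \to 1 \;\Rightarrow\; \left\lceil \lg \frac{n}{k} \right\rceil \text{ levels} \times \Theta(n) \text{ per level} = \Theta\!\left(n \lg \frac{n}{k}\right),$$

and adding the insertion-sort contribution gives $\Theta\!\left(nk + n\lg\frac{n}{k}\right)$ in the worst case.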

0
@srestha so it will be $O(nk) + n \log (n/k)$? But how come $\left(\frac{n}{x}\right) IN(x) + n \log\left(\frac{n}{x}\right)$?
0
@Dq... it is $\Theta\left \{ \left [ \frac{n}{x} \right ].IN(x) + n.\log \left [ \frac{n}{x} \right ] \right \}$
0

Can you please explain how the $\left[\frac{n}{x}\right].IN(x)$ term is coming?

0

I guess you understood the $O(nk)$ for $(n/k)$ lists of $k$ elements each in @srestha's comment?

0
Yes. Has she mentioned only the worst-case scenario of insertion sort, or is she saying the contribution of insertion sort to the total complexity of the problem?
0

So what is the problem with $\left[\frac{n}{x}\right].IN(x)$? You put $IN(x) = x^2$ in the worst case.

0
I thought the total complexity of the insertion sort done here is $O(nk)$. If she has calculated only the worst case, then it is fine :) I was confused with that, actually.
+1
I liked the explanation of Deka, so I wrote "wow deka!" Now people have downvoted it... but why?