+22 votes

Consider two cache organizations. The first one is $32$ $kB$ $2$-way set associative with $32$ $byte$ block size; the second is of the same size but direct mapped. The size of an address is $32$ $bits$ in both cases. A $2$-to-$1$ multiplexer has a latency of $0.6\ ns$ while a $k$-bit comparator has a latency of $\frac{k}{10}\ ns$. The hit latency of the set associative organization is $h_1$ while that of the direct mapped one is $h_2$.
The value of $h_2$ is:  

  1. $2.4$ $ns$
  2. $2.3$ $ns$
  3. $1.8$ $ns$
  4. $1.7$ $ns$
in CO and Architecture by Veteran (105k points)

4 Answers

+15 votes
Best answer
$\text{number of sets}  = \dfrac{\text{cache size}}{\text{no. of blocks in a set } \times \text{ block size}}$

$=\dfrac{32KB}{1 \times 32B} = 1024$

So, number of index bits $= 10,$ and

number of tag bits $=32-10-5=17.$

So, $h_2 =\dfrac{17}{10}= 1.7\text{ ns}$

Correct Answer: $D$
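The arithmetic above can be checked with a short sketch (the variable names are mine, not from the question):

```python
# Parameters from the question; the direct-mapped case is 1-way.
CACHE_SIZE = 32 * 1024   # 32 kB
BLOCK_SIZE = 32          # 32-byte blocks
ADDRESS_BITS = 32
WAYS = 1                 # direct mapped

sets = CACHE_SIZE // (WAYS * BLOCK_SIZE)            # 1024 sets
index_bits = sets.bit_length() - 1                  # log2(1024) = 10
offset_bits = BLOCK_SIZE.bit_length() - 1           # log2(32)   = 5
tag_bits = ADDRESS_BITS - index_bits - offset_bits  # 32-10-5    = 17

h2 = tag_bits / 10       # k-bit comparator: k/10 ns; no MUX delay here
print(h2)                # 1.7
```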
by Veteran (431k points)
edited by
0
Sir, if $h_3$ is for fully associative, then what is the answer?
Is the multiplexer latency used?
+2
Then index bits = 0, tag bits = 32, h3 = 32/10 = 3.2 ns.
+3

@Arjun sir

In fully associative, tag bits $=32-5=27$.

Also MUX will be used here also.

Right?

https://gateoverflow.in/62984/cache-hardware-organisation
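Following the correction above (the $5$ offset bits remain even when there is no index field), the fully associative numbers can be sketched under the same comparator model:

```python
# Fully associative variant of the same cache: no index field,
# but the 5-bit block offset stays.
ADDRESS_BITS = 32
BLOCK_SIZE = 32

offset_bits = BLOCK_SIZE.bit_length() - 1           # 5
index_bits = 0                                      # a block can go anywhere
tag_bits = ADDRESS_BITS - index_bits - offset_bits  # 27

comparator_delay = tag_bits / 10                    # k/10 ns -> 2.7 ns
print(comparator_delay)                             # 2.7
```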

0
@arjun sir
Is the hit latency of a direct mapped cache = latency of the multiplexer + latency of the $k$-bit comparator?
0
But in general, for Direct mapping we need T MUX (T=No. of TAG bits) , so why aren't we taking MUX time into consideration?
0
@arjun sir

Sir, can you please show the hardware implementation of direct mapping in the above question? I am really confused.
+4

@arjun sir: I agree the number of tag bits is $17$,

I agree that only $1$ comparator ($k$-bit) will be used.

But I have seen in the NPTEL lecture by Prof. P. K. Biswas that he also uses a MUX.

The input to the MUX will be the tag bits, the select lines will be the byte (offset) bits, and the output will be the data we require. Then why are you ignoring the use of the MUX?

+3

We need a $27$-bit comparator and a $2^{10}:1$ MUX in case we use a fully associative cache.

So hit latency $= 2.7\ ns\ +$ delay of the $1024:1$ MUX.

+5

sourav, the question asks about the cache HIT latency. In the screenshot, if you want to fetch the data as well, then you need a MUX. Hit latency doesn't include the time needed for fetching the words; it is only the time taken for knowing whether the block exists or not. The miss latency, on the other hand, includes the time in which we fetch the block from main memory. Vicky rix, hope this helps.

0
@Venkat Sai then why is the MUX delay considered in set associative mapping? (gate2006-74)

EDIT:
-----------------------------------------------------------------------------------------------------
There the MUX delay is considered for selecting the block, not for the fetch.
Since in set associative mapping $p=2$ is given,

a $2\times 1$ MUX is needed.
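For reference, the companion part of this question (gate2006-74) asks for $h_1$; a sketch under the same model, where the two tag comparators work in parallel and feed the $2$-to-$1$ MUX:

```python
# 2-way set associative organization of the same 32 kB cache.
CACHE_SIZE = 32 * 1024
BLOCK_SIZE = 32
ADDRESS_BITS = 32
WAYS = 2
MUX_DELAY = 0.6   # 2-to-1 MUX latency from the question, in ns

sets = CACHE_SIZE // (WAYS * BLOCK_SIZE)            # 512 sets
index_bits = sets.bit_length() - 1                  # 9
offset_bits = BLOCK_SIZE.bit_length() - 1           # 5
tag_bits = ADDRESS_BITS - index_bits - offset_bits  # 18

# The two comparators run in parallel, so their delay counts once.
h1 = tag_bits / 10 + MUX_DELAY
print(round(h1, 1))   # 2.4
```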
0
For direct mapping we also need a multiplexer, right? Then why don't we consider the latency of the multiplexer in direct mapping? If we do, the answer will be $1.7\ ns + 0.6\ ns = 2.3\ ns$.

I got confused, please explain.
0

@sourav. Here the input to the MUX is not the tag bits, but rather all the words in a block. Right? (According to Prof. Biswas's lecture.)

0

@Arjun sir, the block size is in bytes, so why are we not converting it into bits in the physical address, as the physical address is 32 bits?

0
We address a "byte" of physical memory, not a "bit".
0
It is $2$-way, then why not divide $2^{17}$ by one more $2^1$?
0
@satbir

Can you please explain?
+17 votes

word offset $=5 \text{-bit}$

line (index) bits $=\log_2\dfrac{32\,KB}{32\,B}=10 \text{-bit}$

so tag bits $=32-10-5=17\text{-bit}$

hit latency$=\text{mux delay + comparator delay}$

1.  A MUX is not required in a direct mapped cache because we have only one comparator (if it is $2\text{-way}$ set associative then there will be $2$ comparators and we need a $2$-to-$1$ MUX to decide hit/miss), so MUX delay $=0.$

2.  comparator delay $=\dfrac{k}{10}=\dfrac{17}{10}=1.7\text{ ns}$

so $h_2 =1.7\text{ ns}$

by Boss (11.1k points)
edited by
0
Now in set associative, 2 comparators are needed, so should $(1.8\times 2)+0.6 = 4.2$ be the answer? Am I right?
0
But in general, for Direct mapping we need T MUX (T=No. of TAG bits) , so why aren't we taking MUX time into consideration?
0
A MUX would be required to select the particular block to be read; however, in a direct mapped cache only one comparator is needed, so there is no need for a MUX and no need to add MUX latency.
0

That is the line (index) field, not the block offset!

$32\,KB/32\,B = 2^{10}$ is the number of lines, hence $10$ index bits.

+8 votes

Cache Size = 32KB

Block Size = 32 Bytes ($2^5$, so 5 word-offset bits)

#Lines in cache = 32KB/32B = 1K lines

So, 10 bits are needed for line number of cache

5 bits for Word Number

Tag = 32-(10+5) = 17 bits.

So, comparator latency is k/10 = 17/10 = 1.7ns.

TAG | LINE | WORD
 17 |  10  |  5

Here we won't need any multiplexer because the line number is provided; the tag of only that line is compared with the tag value of the address. If it matches, a hit occurs, otherwise a miss.

Now, since only one comparator is involved in the hit/miss comparison, the hit latency of this cache is the latency of the $k$-bit comparator (in a direct mapped cache there is no need for a MUX: the line number is provided, that particular line is selected using it, and the tag bits of only that line are checked).

$1.7 + 0 = 1.7\ ns$ (D) ANS


Reference: Computer Organization and Design, Patterson and Hennessy
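The TAG / LINE / WORD split above can also be illustrated by decomposing an address in software (a hypothetical helper, not part of the original answer):

```python
def split_address(addr, line_bits=10, word_bits=5):
    """Split a 32-bit address into (tag, line, word) for this cache."""
    word = addr & ((1 << word_bits) - 1)                 # 5-bit word offset
    line = (addr >> word_bits) & ((1 << line_bits) - 1)  # 10-bit line number
    tag = addr >> (word_bits + line_bits)                # 17-bit tag
    return tag, line, word

tag, line, word = split_address(0xDEADBEEF)
print(tag, line, word)
```

Reassembling the three fields (`(tag << 15) | (line << 5) | word`) recovers the original address, which is a quick sanity check on the field widths.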

by Boss (29.1k points)
edited by
0
but in direct mapped cache, there is no need of MUX right?
0
yes.
0
Why is a MUX not required? I think a MUX is present here. Please explain in brief.
+3
Why do you think we need a MUX here?

The decoder selects one of the cache lines using the line number provided, and when that line is selected, its tag value is compared with the tag of the given address; it is also checked that the cache entry is valid (valid bit set, for memory protection).

Still, if you are not convinced, I'd advise you to refer to Hennessy and Patterson once for this topic.
0

@Ayush Upadhyaya can you please explain: if it is a fully associative cache, then what will be the hit latency and the number of multiplexers and comparators needed?

+8 votes

Here 

Cache size =32 KB , Block size = 32 B . 

Therefore no. of cache lines = cache size / block size = $32\,KB/32\,B$ = 1K or $2^{10}$.

Now since we have to find out hit latency for Direct Mapped cache organization h2 :

Physical Address = 32 bits.

Block offset or word offset = 5 (as $2^5$ is the block size)

Cache line bits = 10 (as no. of cache lines = $2^{10}$)

Hence Tag bits = Physical address bits-(Block offset bits + Cache Line Bits)= 32-(10+5) =17.

The latency of the comparator depends on the number of tag bits, as it compares the tag bits from the physical address with the tag stored in the selected cache line. Here we have 17 tag bits, so we need a 17-bit comparator. As the question states that a $k$-bit comparator has a latency of $k/10\ ns$, the comparator latency in this case is $17/10 = 1.7\ ns$.

Also, from the hardware implementation of direct mapping we know the cache line bits (10 in this case) act as select lines for the MUX, so here we would need $2^{10}$-to-1 MUXes. But the question mentions only a $2$-to-$1$ MUX, whose latency is needed to calculate the hit latency of organization $h_1$, the 2-way set associative mapping; for our organization $h_2$, direct mapping, we have to treat the MUX latency as negligible since it is not given. So

Total latency = latency of comparator (only 1 comparator is ever required in direct mapping) + latency of MUX (negligible here)

$= 1.7\ ns$

by Junior (669 points)