+26 votes
5.1k views

Consider two cache organizations. The first one is $32 \hspace{0.2cm} KB$ $2$-way set associative with a $32 \hspace{0.2cm} byte$ block size; the second is of the same size but direct mapped. The size of an address is $32 \hspace{0.2cm} bits$ in both cases. A $2$-to-$1$ multiplexer has a latency of $0.6 \hspace{0.2cm} ns$ while a $k$-bit comparator has a latency of $\frac{k}{10} \hspace{0.2cm} ns$. The hit latency of the set associative organization is $h_1$ while that of the direct mapped one is $h_2$.

The value of $h_1$ is: 

  1. $2.4  \text{ ns} $
  2. $2.3  \text{ ns}$ 
  3. $1.8  \text{ ns}$ 
  4. $1.7  \text{ ns}$ 
asked in CO & Architecture by Active (3.7k points)
edited by | 5.1k views
+2
The question says 32 kb cache? Is this a typo?
0
following..
0
Yes, the typo is corrected now.
+3

For the value of $h_{2}$, refer to https://gateoverflow.in/43565/gate2006-75

+2
For a $k$-way set associative cache, $k$ comparators are needed. A direct mapped cache is $1$-way set associative, hence it also requires a comparator. But a MUX is needed for data/block selection from multiple blocks, which is not needed in a direct mapped cache: there is only a single block per index, and whether to select it or not is known from the comparator output alone.
0
@Aghori In direct mapping, as soon as the index and tag bits are matched, how do we decide which word should be transferred to the processor? For this we have the word offset (in this case it is 5 bits).

So, to decide which word will be transferred, will a MUX be required or not?

3 Answers

+30 votes
Best answer

Cache size is $32 \hspace{0.2cm} KB$ and cache block size is $32 \hspace{0.2cm} B$. So,

$\text{number of sets}  = \dfrac{\text{cache size}}{\text{no. of blocks in a set } \times \text{ block size}}$

$ = \dfrac{32 \hspace{0.2cm}KB}{2 \times 32 \hspace{0.2cm}B} = 512$

So, number of index bits needed $= 9$ ( since $2^9 = 512$). Number of offset bits $= 5$ (since $2^5 = 32 \hspace{0.2cm} B$ is the block size and assuming byte addressing). So, number of tag bits $= 32 - 9 - 5 = 18$ (as memory address is of $32 \hspace{0.2cm} bits$). 

So, $h_1 = \text{time to compare the tag bits} + \text{time to select the block within the set} \\= 18/10 \text{ ns} + 0.6 \text{ ns} \\= 2.4 \text{ ns}.$

(One tag comparison is needed for each block in the set, i.e., two comparisons here, but they are carried out in parallel and the matching block is then multiplexed to the output.)

Reference: https://courses.cs.washington.edu/courses/cse378/09au/lectures/cse378au09-19.pdf
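
A minimal Python sketch of the arithmetic above may help (not part of the original answer; the constants and variable names are illustrative only):

```python
# Sketch of the h1 calculation for the 2-way set associative cache (names are illustrative).
CACHE_SIZE = 32 * 1024            # 32 KB cache
BLOCK_SIZE = 32                   # 32 B block
ADDRESS_BITS = 32
WAYS = 2                          # 2-way set associative
MUX_DELAY_NS = 0.6                # 2-to-1 multiplexer latency (given)
COMPARATOR_NS_PER_BIT = 1 / 10    # a k-bit comparator takes k/10 ns (given)

num_sets = CACHE_SIZE // (WAYS * BLOCK_SIZE)            # 512 sets
index_bits = num_sets.bit_length() - 1                  # 9
offset_bits = BLOCK_SIZE.bit_length() - 1               # 5
tag_bits = ADDRESS_BITS - index_bits - offset_bits      # 18

h1 = round(tag_bits * COMPARATOR_NS_PER_BIT + MUX_DELAY_NS, 1)  # 1.8 + 0.6 = 2.4 ns
print(num_sets, index_bits, offset_bits, tag_bits, h1)          # 512 9 5 18 2.4
```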

 

answered by Veteran (355k points)
edited by
0
Why do we not add 0.6 in case 2?
+12

The second one is a direct mapped cache. In an associative cache (here 2-way), there are two slots per cache set. So we need to compare the tag in both entries and select the one that matches, and for this a multiplexer is used. We don't need this multiplexer in the case of a direct mapped cache.

+2
..
+1
Sir, one big question in my mind is: why have we not considered the delay of the MUX? The delay of a 2×1 MUX is given, and we can certainly derive the total number of 2×1 MUXes needed to implement the 2^10×1 MUX, as taught in digital logic. We are taking the delay of the OR gate even though it is not mentioned, yet neglecting that we can use 2×1 MUXes to build the 2^10×1 MUX. Are we moulding the question according to the answer? I think if we can build it that way, its delay should be counted.
0
In direct mapping, if we want a 2^10-to-1 MUX, why not realize it using 2-to-1 MUXes in several levels and take 0.6 ns for every level? In fact, the OR gate delay is not mentioned either, so we can take it as zero, but why are we taking 0 for the MUX?
0
Sir, please help me with this doubt.

(Suppose there are 4 sets, each set has 2 lines, and the tag is, say, 16 bits.)

First a set will be selected; for that we will need a 1×4 demux, so the latency of the demux is required. ...(1)

Then each line of the selected set will be compared in parallel, so 2 comparators of size 16 bits are required, and the latency of the comparators is required. ...(2)

Once the line is selected, a 2×1 MUX will be used to select that block for transfer, so the latency of the MUX is required. ...(3)

So total latency = (1) + (2) + (3).

Now all I am asking is: please shed some light on the latency of the demux in (1). Did we ignore it just because nothing is mentioned about it in the question?
0
What if the cache were 4-way set associative and only 2:1 MUXes were available? How many 2:1 MUXes would be used, 2 or 3? (i.e., would the 0.6 ns latency be counted twice or thrice?)
0
@just_bhavana

Irrespective of the associativity of the cache, we always need one OR gate (or 2:1 MUX) because we only have to decide whether the required block is present in the cache or not.
0

Comparison is done in parallel, and we need to select the data, for which we require a 2×1 MUX.

0
@Hemant, don't you think this diagram is wrong? It does not use the block offset at all. Also notice that the 2nd comparator (=) matching the tag has only one input! With what is it matching the tag?
+1

@Arjun sir, it's really confusing here. Now that I have taken so much effort, please clarify.

I guess the following steps are performed in 2-way set associative mapping:

  1. Index the set
    With 9 bits (this can be done by feeding the 9 bits to a DeMUX's address input; the DeMUX will in turn drive the corresponding index output HIGH. But no information about such an indexing DeMUX, or any other approach, is given, so its delay is assumed negligible).
  2. Match the line tags inside the set indexed in the above step
    With 18 bits, to select the appropriate line within the set (as stated, this takes 18/10 = 1.8 ns).
  3. Offset the word inside the line chosen in the above step
    With 5 bits. Again, no information is given about this, but it can be done with a MUX: apply the 5 least significant offset bits as the MUX address lines, with the data bits of the different words within the chosen line as inputs; the output is the data bits of the word at the given offset. (Somewhat more involved since each word is multi-bit, but in any case we have to select the one word at the given offset from many words, and many-to-one is exactly the functionality of a MUX.)

Doubts

  1. No information is given about how to index the set (it can be done with a DeMUX as explained in point 1 above).
  2. No information is given about how to offset the word inside the chosen line (it can be done with a MUX as shown in the diagram below).
  3. Not getting where to use the 2-to-1 MUX stated in the question. I don't understand what you have used it for while adding its 0.6 ns delay to 1.8 ns. You have said "the succeeding one multiplexed as the output", but I really don't get how we multiplex like that. I feel the reference circuit diagram is flawed:

0
Very good explanation, Arjun sir.
+35 votes
 

[Figure omitted: hand-drawn diagrams of the cache organizations] gives a better insight.

answered by Boss (30.7k points)
0
That info is not given. But is the info about the OR gate given? That is not given either, so why not consider it zero too? We know we can implement the 2^10×1 MUX using 2×1 MUXes, so why have we not derived the total number of MUXes required (along with a 1×2^9 MUX) and calculated their delay? I think you missed that. Please recheck and edit the image.
0

Why do we need a $2^{10}:1$ MUX? Why not a $10:2^{10}$ decoder? Also, you have written a $9:2^9$ MUX. Should it not be a decoder?

+2
The figure is not correct; see the one at the end of the accepted answer.
0
In the set associative case, why is a $2^9 \times 9$ MUX needed? Why not a $2^9 \times 1$ MUX? See the picture you have given.
+3 votes

In Direct Mapped Cache

Hit latency $= 2^{10}$-to-$1$ MUX delay $+$ $17$-bit tag comparator delay
                    $= 0 + 17/10 = 1.7$ ns

(No information is given regarding the $2^{10}$-to-$1$ MUX delay, so let us assume it to be $0$.)

Only $1$ comparator and $17$ $2^{10}$-to-$1$ MUXes are required, and the MUXes operate in parallel.


2-way Set Associative Cache

Hit latency $= 2^9$-to-$1$ MUX delay $+$ comparator delay $+$ OR gate delay

                   $= 0 + 18/10 + 0.6 = 2.4$ ns


$18$ ($= 2$ comparators per set), $324$ ($= 36$ $2^9$-to-$1$ MUXes per set) and $9$ ($= 1$ OR gate per set) are required.
The comparators operate in parallel and the MUXes operate in parallel. The OR gate can be realised using a $2$-to-$1$ MUX.
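
For comparison, a short Python sketch of this answer's two computations (an illustration, not the original author's code; it assumes, as the answer does, that the line/set-select MUX delay is 0 because the question does not give it):

```python
# h2 (direct mapped) and h1 (2-way set associative) per the decomposition above.
ADDRESS_BITS = 32
CACHE_SIZE = 32 * 1024            # 32 KB
BLOCK_SIZE = 32                   # 32 B
OFFSET_BITS = 5                   # log2(32 B block)

# Direct mapped: 1024 lines -> 10 index bits -> 17-bit tag.
dm_index_bits = (CACHE_SIZE // BLOCK_SIZE).bit_length() - 1        # 10
dm_tag_bits = ADDRESS_BITS - dm_index_bits - OFFSET_BITS           # 17
h2 = 0 + dm_tag_bits / 10                                          # MUX assumed 0 -> 1.7 ns

# 2-way set associative: 512 sets -> 9 index bits -> 18-bit tag.
sa_index_bits = (CACHE_SIZE // (2 * BLOCK_SIZE)).bit_length() - 1  # 9
sa_tag_bits = ADDRESS_BITS - sa_index_bits - OFFSET_BITS           # 18
h1 = 0 + sa_tag_bits / 10 + 0.6                                    # 1.8 + 0.6 = 2.4 ns

print(round(h1, 1), round(h2, 1))                                  # 2.4 1.7
```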

answered by Boss (22.1k points)
edited by
+3
For direct mapped, why 17 MUXes? Because after a cache entry is selected, we have only one possible data source; we don't need to multiplex again here. We use a multiplexer when there are multiple active inputs and we want to pass only one of them.
0

Arjun sir, amarVashishth's diagram for 2-way associative uses two comparators. But only if the MUX is placed first and the comparator after it can we check whether it is a hit or a miss; a MUX can't compare, only a comparator can.

+1

@arjun Sir..
We have a 17-bit tag, and one MUX is used to select 1 bit from the tag bits for comparison, so we need 17 MUXes to select and compare the 17-bit tag.

+1
@arjun sir

In direct mapped we have a $2^{10} \times 1$ MUX which selects any one line (cache block) of the cache; once a line is selected, the MUX outputs a single bit of the corresponding TAG (of the selected line).

Now we have 17 bits in total in the tag, so we need 17 MUXes.

Number of MUXes in direct mapped = number of bits in the TAG = size of the comparator

Size of each MUX = $N \times 1$, where $N$ is the number of lines in the cache memory

Number of comparators = 1
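
A tiny illustrative sketch of the counts this comment describes (hypothetical variable names; the figures follow from the question's direct-mapped parameters):

```python
# Hardware counts for the direct-mapped cache as described in the comment above.
num_lines = (32 * 1024) // 32      # N = 1024 lines
tag_bits = 32 - 10 - 5             # 17-bit tag per line
num_mux = tag_bits                 # one N-to-1 MUX per tag bit -> 17 MUXes
mux_size = f"{num_lines}-to-1"     # each MUX is 1024-to-1
num_comparators = 1                # a single 17-bit comparator
print(num_mux, mux_size, num_comparators)   # 17 1024-to-1 1
```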
0
See the figure at the end of my answer; all hand-drawn figures for this question are wrong :(

If you have any query about that figure, we can discuss.
0
@arjun sir, I still don't get why my figure is not correct. @cse23 has given the explanation very nicely; what I know is exactly the same.

Please correct us if we are wrong.
+1
First of all, we have 10 bits for indexing. How are these used in the 1024-to-1 MUX?
0
Size of the MUX = $2^{\text{index bits}}$ to 1.
That means in direct mapping, 10 bits are used for indexing, therefore a $2^{10}$ to 1 = 1024 to 1 MUX is used. In the question nothing is specified about the $2^{10}$ to 1 MUX latency, so we take it as zero.
+1
But Arjun sir, in a direct mapped cache we need the MUX to select the line number, as shown in the hand-made diagrams. That is all I know. It's okay that we consider their delay negligible, but why are these pictures wrong?
0

9 bits to represent 2^9 sets.

$2 \times 2^9$ (= 2 comparators per set), $36 \times 2^9$ (= 36 $2^9$-to-1 MUXes per set) and $2^9$ (= 1 OR gate per set) should be the answer, @pC. Please correct me if wrong.

0

@Arjun  sir 

We need 

For direct-mapped cache:
  1) Block-index decoder
  2) Comparators

For fully associative cache:
  1) Comparators
  2) Block-selection MUX

For set-associative cache:
  1) Set-index decoder
  2) Comparators
  3) Way-selection MUX

Is this correct? Also, we can use a decoder to find the correct word within a block using the block-offset bits, right?

0
Only one MUX is needed for direct mapping.
Answer: 1. ($2.4$ ns)
