894 views
• What is the size of the MUX needed in a direct-mapped cache?

For example: | Tag = $17$ | Line = $10$ | Word = $5$ |

Diagram reference: direct-mapped cache with a multi-word block
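As a quick sketch (not from the thread, the $32$-bit address width and the `0xDEADBEEF` value are assumed for illustration), the Tag $= 17$ / Line $= 10$ / Word $= 5$ split above can be checked in a few lines:

```python
# Split a physical address into the Tag = 17, Line = 10, Word = 5 fields
# from the example above (field widths taken from the question).
WORD_BITS = 5
LINE_BITS = 10
TAG_BITS = 17

def split_address(addr):
    word = addr & ((1 << WORD_BITS) - 1)                 # lowest 5 bits
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)  # next 10 bits
    tag = addr >> (WORD_BITS + LINE_BITS)                # remaining 17 bits
    return tag, line, word

tag, line, word = split_address(0xDEADBEEF)  # assumed sample address
```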

• In a set-associative cache, do the MUX and OR gate work in parallel? That is, is the data loaded through the MUX while the OR gate checks for a hit/miss at the same time, with the data delivered on a hit? If so, shouldn't this be a disadvantage for the set-associative cache?

• Also, if the MUX and OR gate work in parallel, does the hit latency include both delays?

Diagram reference:

• Also, is there any reason/explanation for the image below?

+4
For a direct-mapped cache, since we need to select the particular line in which our data resides, the number of lines determines the MUX input count.
So here a $2^{10}:1$ MUX is required; after selecting a line, its tag is compared with the address tag, so one comparator is enough.
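The line selection and single tag comparison described above can be sketched roughly like this (a toy model, not the thread's circuit; the class and method names are mine):

```python
# Direct-mapped lookup sketch: the 10-bit line field selects exactly one of
# the 2^10 lines (the 2^10:1 selection), and a single comparator checks that
# line's stored tag against the address tag.
NUM_LINES = 2 ** 10

class DirectMappedCache:
    def __init__(self):
        # each entry holds (valid, tag)
        self.lines = [(False, None)] * NUM_LINES

    def is_hit(self, tag, line):
        valid, stored_tag = self.lines[line]  # line field selects one entry
        return valid and stored_tag == tag    # the single comparator

    def fill(self, tag, line):
        self.lines[line] = (True, tag)
```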
+1
And in a $k$-way set-associative cache, I think that since we need to check a particular set, we need a MUX of size $k:1$, and the number of comparators will be $k$, as the selected set has $k$ lines.
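A rough sketch of that $k$-way lookup (the associativity, set count, and names below are assumed for illustration): the set index picks one set, $k$ comparators check the $k$ tags, the per-way match signals are OR-ed for hit/miss, and a $k:1$ MUX picks the matching way's data.

```python
# k-way set-associative lookup sketch (toy model).
K = 4            # associativity, assumed
NUM_SETS = 256   # number of sets, assumed

class SetAssociativeCache:
    def __init__(self):
        # each set holds K ways of (valid, tag, data)
        self.sets = [[(False, None, None)] * K for _ in range(NUM_SETS)]

    def lookup(self, tag, set_index):
        ways = self.sets[set_index]
        # the k comparators, conceptually in parallel
        matches = [valid and stored == tag for valid, stored, _ in ways]
        hit = any(matches)                                    # the OR gate
        data = ways[matches.index(True)][2] if hit else None  # the k:1 MUX
        return hit, data

    def fill(self, tag, set_index, way, data):
        self.sets[set_index][way] = (True, tag, data)
```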
+1
@pC

I have read those questions; these ones are based on them.
+2

Yes, roughly we can say the MUX and OR gate work in parallel in cache memory. The OR gate combines the tag-comparison results to detect a hit in the set, while the MUX is used to select which data bits are valid.

On a hit the data is present in the cache, so why do you think this is a disadvantage of the set-associative cache?

If the data is already present in the cache, you just read it (its valid bit is already set), but you have to go to the particular set to get it. If the data is not present in the cache, you have to check which policy is used - write-back or write-through - to see which one gets the data in less time.

Another thing is write-allocate vs. no-write-allocate.

With no-write-allocate, a write miss is sent to memory without loading the block into the cache, so the cache is not filled on that miss.

With write-allocate, the block is loaded into the cache on a write miss, so later accesses to it can hit.

In all these cases we have to include the MUX and OR-gate delays too, because these are the components that select the lines as per the given policy. So if both delays are given, they must be included in the hit latency.

https://courses.cs.washington.edu/courses/cse378/09au/lectures/cse378au09-19.pdf

Yes, in a fully associative cache no indexing is required; data can be placed anywhere in the cache, with no particular arrangement. But the tag and word bits are still there, which means the MUX and OR gate are used here too. Moreover, they are interrelated: the MUX select signals and the OR gate together determine which access is a hit and which is a miss.
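The fully associative case above can be sketched the same way (again a toy model with assumed names): with no index field, the address tag is compared against every line's tag, and the match signals are OR-ed for hit/miss and used to select the matching line's data.

```python
# Fully associative lookup sketch: one comparator per line, no index field.
class FullyAssociativeCache:
    def __init__(self, num_lines):
        self.lines = [(False, None, None)] * num_lines  # (valid, tag, data)

    def lookup(self, tag):
        # compare against every line's tag
        matches = [valid and stored == tag for valid, stored, _ in self.lines]
        hit = any(matches)                                      # OR gate
        data = self.lines[matches.index(True)][2] if hit else None
        return hit, data

    def fill(self, line, tag, data):
        self.lines[line] = (True, tag, data)
```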

Refer: Hamacher (Computer Organization).

0
@Srestha

What about the $1$st question?

Thanks for the other two questions, I got them :)
0

It requires a MUX of size $2^{10}:1$, because that many lines have to be looked at for comparison with the tag field.