The Gateway to Computer Science Excellence
  • What is the size of the MUX needed in a direct-mapped cache?

For example: | Tag = $17$ | line = $10$ | word = $5$ |

Diagram Reference :- Direct-mapped cache with a multi-word block
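To make the bit split concrete, here is a minimal Python sketch of decoding an address with the $17$ | $10$ | $5$ split above. The 32-bit address width and the names `split_address` and `NUM_LINES` are assumptions for illustration only:

```python
# Decode a 32-bit address into tag / line / word fields,
# using the 17 | 10 | 5 split from the example above.
# (The 32-bit width and all names are hypothetical, for illustration.)
TAG_BITS, LINE_BITS, WORD_BITS = 17, 10, 5

def split_address(addr: int):
    word = addr & ((1 << WORD_BITS) - 1)                 # lowest 5 bits
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)  # next 10 bits
    tag = addr >> (WORD_BITS + LINE_BITS)                # top 17 bits
    return tag, line, word

# The line field drives the line-select MUX, so the MUX is 2^10 : 1.
NUM_LINES = 1 << LINE_BITS  # 1024 lines -> 1024:1 MUX
```

The 10-bit line field can index any of the $2^{10}$ lines, which is why the line-select MUX needs $2^{10}$ inputs.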

  • In a set-associative cache, do the MUX and OR gate work in parallel? That is:

What I mean to ask is: the data is loaded through the MUX while the OR gate checks for hit/miss in parallel? If it is a hit, the data is delivered. Then shouldn't this be a disadvantage of a set-associative cache?

  • Also, if the MUX and OR gate work in parallel, does the hit latency include both delays?

Diagram Reference :- 

  • Also, is there any reason/explanation for the image below?

in CO and Architecture by Veteran (50.9k points)
For a direct-mapped cache, since we need to select the particular line in which our data resides, we give the number of lines as input to the MUX.

So here a $2^{10} : 1$ MUX will be required; further, after selecting a line, its tag is compared with the tag field of the address, so one comparator is enough.

And in a set-associative cache, I think that since we need to check a particular set, we need a MUX of size $k : 1$, and the number of comparators will be $k$ only, as the selected set has $k$ lines.
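Following the counts in this answer, a small sketch of the MUX inputs and comparators per lookup. The helper name `mux_and_comparators` and the return shape are hypothetical, for illustration:

```python
# Per the answer above: a direct-mapped cache (k = 1) needs a
# num_lines:1 MUX and a single comparator; a k-way set-associative
# cache needs a k:1 data MUX and k comparators working in parallel.
# (Function name and return shape are hypothetical, for illustration.)
def mux_and_comparators(num_lines: int, k: int) -> dict:
    if k == 1:  # direct mapped
        return {"mux_inputs": num_lines, "comparators": 1}
    return {"mux_inputs": k, "comparators": k}  # k-way set associative
```

So for the $2^{10}$-line example, a direct-mapped cache gives a $1024 : 1$ MUX with one comparator, while a 4-way set-associative cache gives a $4 : 1$ data MUX with 4 comparators.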

I have read those questions. These questions are made from those questions only.

Yes, roughly we can say the MUX and OR gate work in parallel in cache memory. The OR gate combines the tag-comparison results to signal a hit in the set, while the MUX is used to select which data bits are valid.

On a hit, the data is present in the cache. So why do you think this is a disadvantage of a set-associative cache?

If the data is already present in the cache, then just read it once; the valid bit will already be set. But you have to go to the particular set and get the data. Moreover, if the data is not present in the data buffer, you have to check the write policy, either write-back or write-through, to see which is better for getting the data in less time.

Another thing is write-allocate vs. no-write-allocate.

Under no-write-allocate, a write miss is sent directly to the next level without bringing the block into the cache.

Under write-allocate, the block is loaded into the cache on a write miss, so subsequent accesses to it can hit.

In all these cases we have to include the MUX and OR gate delays too, because those are the components that select the lines as per the given policy. So, if both delays are given, they must be included.
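As a toy illustration of that point (the delay values and the helper `hit_latency` are made up, not taken from any datasheet): if the two paths truly run in parallel, the slower one dominates; if they are serialized, the delays add.

```python
# Toy hit-latency model with hypothetical gate delays in ns.
def hit_latency(mux_delay: float, or_delay: float, parallel: bool = True) -> float:
    # Parallel paths: the slower of the two dominates.
    # Serial paths: the delays simply add.
    return max(mux_delay, or_delay) if parallel else mux_delay + or_delay

hit_latency(2.0, 3.0)                  # parallel: 3.0 ns
hit_latency(2.0, 3.0, parallel=False)  # serial:   5.0 ns
```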

Yes, in a fully associative cache no indexing is required; data can be placed anywhere in the cache, with no particular arrangement. But the tag and word bits are still there, which means the MUX and OR gate are used here too. Moreover, they are interrelated: the match lines that feed the OR gate also drive the MUX select, determining which data is a hit and which is a miss.
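The fully associative lookup described here can be sketched as a software model (the name `fa_lookup` and its data layout are assumptions for illustration; real hardware does all comparisons simultaneously):

```python
# Software model of a fully associative lookup: every stored tag is
# compared in parallel, the hit signal is the OR of all match lines,
# and the matching way drives the data MUX select.
def fa_lookup(cache_lines, addr_tag):
    """cache_lines: list of (valid, tag) pairs, one entry per line."""
    matches = [valid and tag == addr_tag for valid, tag in cache_lines]
    hit = any(matches)                          # OR over all match lines
    way = matches.index(True) if hit else None  # MUX select on a hit
    return hit, way
```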



What about the $1$st question?

Thanks for other two questions. I got it :)

It requires a MUX of size $2^{10} : 1$ because that many lines have to be looked at for comparison with the tag field.
