Are there any multiplexers present in the implementation of a direct mapped cache?

If yes, would the hit latency be multiplexer latency + comparator latency?

2 Answers

2 votes

My previous answer was wrong.

A $\textbf{multiplexer}$ is used in the direct mapped cache.

The format of the direct mapped cache address is

Tag | Line Offset | Word Offset

Each main memory block gets mapped to a $\textbf{fixed block}$ in the cache memory, and this is decided by the formula

$\textbf{cache line (or block)} = k \% S$, where

k is the main memory block number

S is the number of blocks (lines) in the cache memory
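
For example (with hypothetical sizes): if the cache has $S = 8$ lines and the CPU references main memory block $k = 21$, then block 21 always maps to cache line $21 \% 8 = 5$; blocks $5, 13, 21, 29, \dots$ all contend for that same line.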

The CPU-generated address is interpreted according to the direct mapped cache format above.

When the CPU is looking for data, the cache memory is searched first using the $\text{Line Offset}$.

$\text{Why is the Line Offset used first?}$ Because many main memory blocks contend for the same cache block, the line offset selects the one candidate cache block. Once a block is selected, its stored tag is compared with the CPU-generated tag; if the tags match, one of the words from the selected block is sent to the CPU.

$\text{How is a word fetched from the selected cache block?}$ All the words in the block become inputs to the multiplexer, and the $\text{CPU-generated Word Offset}$ acts as the select line of the multiplexer: it enables the particular input line, so the desired word from the set of words in the block appears at the output.
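
As a rough sketch of this hit path (the cache geometry, variable names, and the `read` helper below are all hypothetical, chosen only to illustrate the line-select, tag-compare, and word-select steps):

```python
# Hypothetical geometry: 8 cache lines, 4 words per block.
WORDS_PER_BLOCK = 4
NUM_LINES = 8

# Each cache line holds a valid bit, a tag, and the block's words.
cache = [{"valid": False, "tag": None, "words": [0] * WORDS_PER_BLOCK}
         for _ in range(NUM_LINES)]

def read(address):
    """Return the word at a (word-addressed) address on a hit, else None."""
    word_offset = address % WORDS_PER_BLOCK     # Word Offset: which word inside the block
    block_number = address // WORDS_PER_BLOCK   # main memory block number k
    line_index = block_number % NUM_LINES       # Line Offset: which cache line to check
    tag = block_number // NUM_LINES             # Tag: the remaining high-order bits

    line = cache[line_index]                    # select the single candidate line
    if line["valid"] and line["tag"] == tag:    # comparator: does the stored tag match?
        return line["words"][word_offset]       # multiplexer: Word Offset selects one word
    return None                                 # miss
```

The final indexing step, `line["words"][word_offset]`, plays the role of the multiplexer: every word of the selected block is a candidate input, and the word offset acts as the select line.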

0 votes

There is no multiplexer present in the $\text{Direct mapped cache}$; it is used in the $\text{Set associative cache}$. The format of the direct mapped cache address is

Tag | Line Offset | Word Offset

So only a comparator is required to match the tag bits, and once the tags match, that line is selected.

$\textbf{PS:- Edit}$

There are two scenarios:

$\underline{\textbf{1) When the cache is empty or the needed block is not present in the cache:}}$

Each main memory block gets mapped to a fixed block in the cache memory. So when a block is not present in the cache, the main memory block is brought into its fixed block in the cache memory, and this is decided by the formula

$\textbf{cache line (or block)} = k \% S$, where

$\text{k is the main memory block number}$

$\text{S is the number of blocks in the cache memory}$

$\underline{\textbf{2) When the desired block is present in the cache memory:}}$

When the CPU is looking for some data, it searches within the blocks present in the cache memory. In order to find the correct cache block (or line), it uses a comparator to match the tag bits. If the stored tag matches the CPU-generated tag bits, then that block is sent to the CPU; otherwise (when the desired block is not present in the cache memory) the desired main memory block is brought into the cache memory using the above formula $k \% S$.
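
A small sketch of this hit/miss flow, under the same hypothetical geometry as the sketch in the first answer (the `main_memory` list and `access` helper are made up for illustration):

```python
# Hypothetical sizes: 64 main memory blocks, 8 cache lines, 4 words per block.
WORDS_PER_BLOCK = 4
NUM_LINES = 8
NUM_MM_BLOCKS = 64

# Fake main memory: block b holds the words b*4 .. b*4+3.
main_memory = [[b * WORDS_PER_BLOCK + w for w in range(WORDS_PER_BLOCK)]
               for b in range(NUM_MM_BLOCKS)]

cache = [{"valid": False, "tag": None, "words": [0] * WORDS_PER_BLOCK}
         for _ in range(NUM_LINES)]

def access(address):
    """Return the requested word, filling the fixed cache line on a miss."""
    word_offset = address % WORDS_PER_BLOCK
    k = address // WORDS_PER_BLOCK     # main memory block number k
    line_index = k % NUM_LINES         # fixed line given by k % S
    tag = k // NUM_LINES

    line = cache[line_index]
    if not (line["valid"] and line["tag"] == tag):
        # Scenario 1: miss, so bring block k into its fixed cache line.
        line["valid"] = True
        line["tag"] = tag
        line["words"] = list(main_memory[k])
    # Scenario 2: hit (or the line just filled), return the word.
    return line["words"][word_offset]
```

On a miss the block is simply placed in line $k \% S$; no replacement policy is needed, because each main memory block has exactly one possible line.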

For more reference, refer to slide number 14:

https://web.stanford.edu/class/ee282h/handouts/Handout28.pdf

