in CO and Architecture
24,272 views
75 votes
The read access times and the hit ratios for different caches in a memory hierarchy are as given below:
$$\begin{array}{|l|c|c|} \hline \text {Cache} &  \text{Read access time (in nanoseconds)}& \text{Hit ratio} \\\hline \text{I-cache} & \text{2} & \text{0.8} \\\hline \text{D-cache} & \text{2} & \text{0.9}\\\hline \text{L2-cache} & \text{8} & \text{0.9} \\\hline \end{array}$$
The read access time of main memory is $90\;\text{nanoseconds}$. Assume that the caches use the referred-word-first read policy and the write-back policy. Assume that all the caches are direct mapped caches. Assume that the dirty bit is always $0$ for all the blocks in the caches. In execution of a program, $60\%$ of memory reads are for instruction fetch and $40\%$ are for memory operand fetch. The average read access time in nanoseconds (up to $2$ decimal places) is _________

4 Comments

@Arjun Sir

I have a doubt: when there is a miss, should we add the previous access times, as in

MAT = L1 access time + L1 miss rate * (L1 access time + L2 access time) + L1 miss rate * L2 miss rate * (memory access time + L1 access time + L2 access time)

and in which case do we multiply with the hit rate, as given in

https://www.geeksforgeeks.org/multilevel-cache-organisation/


@Manu Thakur Sorry for pointing out this typo, sir, but 0.1 should appear only two times in the operand fetch calculation. [Your approach is simple & elegant]

Why is 0.1 added 3 times in the question? @manu

4 Answers

106 votes
Best answer

$L2$ cache is shared between Instruction and Data (is it always? see below)

So, average read time

$=$ Fraction of Instruction Fetch $\ast $ Average Instruction fetch time $+$ Fraction of Data Fetch $\ast$ Average Data Fetch Time

Average Instruction fetch Time $= L1$ access time $+ L1$ miss rate $\ast \;L2$ access time $+ L1$ miss rate $\ast\; L2$ miss rate $\ast $ Memory access time

$\quad= 2 + 0.2 \times 8 + 0.2 \times 0.1 \times 90$ 

$\quad= 5.4 \;\text{ns}$

Average Data fetch Time $= L1$ access time $+ L1$ miss rate $\ast \;L2$ access time $+ L1$ miss rate $\ast \;L2$ miss rate $\ast $ Memory access time

$\quad = 2 + 0.1 \times 8 + 0.1 \times 0.1 \times 90$ 

$\quad= 3.7\;\text{ns}$

So, average memory access time

$$= 0.6 \times 5.4 + 0.4 \times 3.7 = 4.72\; \text{ns}$$
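As a quick sanity check, here is a small Python sketch of the same calculation; the names and structure are my own, not from the question.

```python
# Numeric check of the calculation above (hierarchical access, shared L2).
# Times are in nanoseconds; hit ratios are taken from the question.
L1_TIME, L2_TIME, MEM_TIME = 2, 8, 90
L2_HIT = 0.9

def avg_read_time(l1_hit):
    """L1 time + (L1 miss)*L2 time + (L1 miss)*(L2 miss)*memory time."""
    l1_miss = 1 - l1_hit
    return L1_TIME + l1_miss * L2_TIME + l1_miss * (1 - L2_HIT) * MEM_TIME

t_inst = avg_read_time(0.8)   # instruction fetch: 2 + 0.2*8 + 0.2*0.1*90 = 5.4
t_data = avg_read_time(0.9)   # operand fetch:     2 + 0.1*8 + 0.1*0.1*90 = 3.7
t_avg = 0.6 * t_inst + 0.4 * t_data
print(round(t_inst, 2), round(t_data, 2), round(t_avg, 2))  # 5.4 3.7 4.72
```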


Now, why must $L2$ be shared? Because otherwise it would be used for either Instructions or Data alone, and it is not logical to use it for only one. Ideally this should have been mentioned in the question, but it can also be safely assumed (not enough merit for Marks to All). Some more points in the question:

Assume that the caches use the referred-word-first read policy and the write-back policy

The write-back policy is irrelevant for solving the given question, as we do not care about writes. The referred-word-first read policy means there is no extra time required to get the requested word from the fetched cache line.

Assume that all the caches are direct mapped caches.

Not really relevant, as the access times are given directly.

Assume that the dirty bit is always 0 for all the blocks in the caches

Dirty bits are used for cache replacement, which is not asked about in the given question. But this can mean that there is no extra delay when a read miss in the cache leads to a possible cache-line replacement. (In a write-back cache, when a replaced cache line is dirty, it must be written back to main memory.)


4 Comments

Do we have to know the architecture beforehand? Otherwise, how can we know that L2 is shared?
In computer science our main motto is to reduce the delay as much as possible and get the maximum efficiency, so simultaneous access is the default here because its delay is less. If level order (hierarchical) access is explicitly mentioned, then use hierarchical.

@saheb sarkar1997 Bro, you are wrong, because the default access method is hierarchical access.

Source:- 

Simultaneous and Hierarchical Cache Accesses - GeeksforGeeks

5 votes

We use hierarchical access

 

Using I-cache:

Tavg1 = H1T1 + (1 - H1)(H2)(T1 + T2) + (1 - H1)(1 - H2)(T1 + T2 + T3)

        = (0.8 * 2) + (0.2)(0.9)(10) + (0.2)(0.1)(100)

        = 1.6 + 1.8 + 2 = 5.4 ns

 

Using D-cache:

Tavg2 = H1T1 + (1 - H1)(H2)(T1 + T2) + (1 - H1)(1 - H2)(T1 + T2 + T3)

        = (0.9 * 2) + (0.1)(0.9)(10) + (0.1)(0.1)(100)

        = 1.8 + 0.9 + 1 = 3.7 ns

 

Now Tavg = (60% of Tavg1) + (40% of Tavg2)

              = 0.6 * 5.4 + 0.4 * 3.7 = 4.72 ns
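The same result can be reproduced with the path-time form of the formula used in this answer; below is a minimal Python sketch (variable names are illustrative only).

```python
# Path-time form: weight each possible path (L1 hit, L1 miss + L2 hit,
# L1 miss + L2 miss) by its probability and its total latency.
T1, T2, T3 = 2, 8, 90          # L1 cache, L2 cache and main-memory read times (ns)

def tavg(h1, h2=0.9):
    """H1*T1 + (1-H1)*H2*(T1+T2) + (1-H1)*(1-H2)*(T1+T2+T3)"""
    return (h1 * T1
            + (1 - h1) * h2 * (T1 + T2)
            + (1 - h1) * (1 - h2) * (T1 + T2 + T3))

t_inst = tavg(0.8)             # 0.8*2 + 0.2*0.9*10 + 0.2*0.1*100 = 5.4
t_data = tavg(0.9)             # 0.9*2 + 0.1*0.9*10 + 0.1*0.1*100 = 3.7
print(round(0.6 * t_inst + 0.4 * t_data, 2))   # 4.72
```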

1 vote

Useful solutions


 
