11.4k views
The read access times and the hit ratios for different caches in a memory hierarchy are as given below:

$$\begin{array}{|l|c|c|} \hline \text {Cache} & \text{Read access time (in nanoseconds)}& \text{Hit ratio} \\\hline \text{I-cache} & \text{2} & \text{0.8} \\\hline \text{D-cache} & \text{2} & \text{0.9}\\\hline \text{L2-cache} & \text{8} & \text{0.9} \\\hline \end{array}$$

The read access time of main memory is $90$ $\text{nanoseconds}$. Assume that the caches use the referred-word-first read policy and the write-back policy. Assume that all the caches are direct mapped caches. Assume that the dirty bit is always $0$ for all the blocks in the caches. In the execution of a program, $60$% of memory reads are for instruction fetch and $40$% are for memory operand fetch. The average read access time in nanoseconds (up to $2$ decimal places) is _________

0
0.6*{0.8(2) + 0.2 *0.9 * (4) + 0.2*0.1*0.9(12) + 0.2*0.1*0.1(112) } + 0.4(90)

=37.64

is it correct???
+1
@bad_engineer ... for operand fetch it uses the D-cache. As the cache uses the write-back policy, an immediate write to main memory will not be done.
0
Is 6.30 correct or not?
0
Will this question lead to "Marks to All" due to ambiguity?
+18
Average access time for instruction fetch
$= 0.6*0.8*2ns + 0.6*0.2*0.9*(8ns + 2ns) + 0.6*0.2*0.1*(90 + 8 + 2)ns = 3.24 ns$

Average access time for operand fetch
$= 0.4*0.9*2ns + 0.4*0.1*0.9*(8ns + 2ns) + 0.4*0.1*0.1*(90 + 8 + 2)ns = 1.48ns$
Average Read Access Time $=3.24ns + 1.48ns = 4.72ns$
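As a cross-check, the weighted terms above ($0.6 \times 5.4 = 3.24$ ns and $0.4 \times 3.7 = 1.48$ ns) can be reproduced with a short Python sketch (mine, not part of the comment; variable names are illustrative):

```python
# Sketch (not from the thread): reproduce the hierarchical-access calculation.
# Access times (ns) and hit ratios are the ones given in the question.
T_L1, T_L2, T_MEM = 2, 8, 90      # L1 (I- or D-cache), L2, main memory
H_L2 = 0.9                        # L2 hit ratio

def avg_read(h_l1):
    # Hierarchical access: on a miss, the next level's time adds on top.
    return (h_l1 * T_L1
            + (1 - h_l1) * H_L2 * (T_L1 + T_L2)
            + (1 - h_l1) * (1 - H_L2) * (T_L1 + T_L2 + T_MEM))

t_inst = avg_read(0.8)            # instruction fetch: 5.4 ns per access
t_data = avg_read(0.9)            # operand fetch: 3.7 ns per access
amat = 0.6 * t_inst + 0.4 * t_data
print(round(amat, 2))             # 4.72
```

Weighting the per-access averages by the 60/40 fetch mix gives the same 4.72 ns.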
0
@Manu

What do you mean by "caches use the referred-word-first read policy" here?
0
@Samridhi, I don't know! And that is not even required to answer this question.
0

why are we adding the times? It is not given in the question that all levels are accessed serially.

+7

This is the architecture mentioned in the question.

0
What if 30% of the blocks are dirty? What extra time would we have to add?

0.3 * memory block transfer time??
0
Referred-word-first read means the desired word of the block is fetched from the lower-level memory first and sent to the CPU as soon as possible so that execution can resume; the remaining words are fetched after this.

Hence, here we have to use the hierarchical access method.
0

@Arjun Sir

I have a doubt: when there is a miss, should we add the previous access times?

MAT = L1 Access Time + L1 Miss Rate [ L1 Access Time + L2 Access Time] + L1 Miss Rate * L2 Miss Rate[Memory Access time + L1 Access Time + L2 Access Time]

and

in which case do we multiply by the hit rate, as given in

https://www.geeksforgeeks.org/multilevel-cache-organisation/

+1

@Manu Thakur, sorry for pointing out this typo, but sir, 0.1 should appear only two times in the operand fetch calculation. [Your approach is simple & elegant]

L2 cache is shared between Instruction and Data (is it always?, see below)

= Fraction of Instruction Fetch * Average Instruction fetch time + Fraction of Data Fetch * Average Data Fetch Time

Average Instruction fetch Time = L1 access time + L1 miss rate * L2 access time + L1 miss rate * L2 miss rate * Memory access time

$\quad= 2 + 0.2 \times 8 + 0.2 \times 0.1 \times 90$
$\quad= 5.4 \text{ ns}$

Average Data fetch Time = L1 access time + L1 miss rate * L2 access time + L1 miss rate * L2 miss rate * Memory access time

$\quad = 2 + 0.1 \times 8 + 0.1 \times 0.1 \times 90$
$\quad= 3.7\text{ ns}$

So, average memory access time

$$= 0.6 \times 5.4 + 0.4 \times 3.7 = 4.72\text{ ns}$$

Now, why must L2 be shared? Because otherwise we could use it for only instructions or only data, and it is not logical to use it for just one. Ideally this should have been mentioned in the question, but it can also be safely assumed (not enough merit for Marks to All). Some more points about the question:

Assume that the caches use the referred-word-first read policy and the writeback policy

The write-back policy is irrelevant for solving the given question as we do not care about writes. The referred-word-first read policy means there is no extra time required to get the requested word from the fetched cache line.

Assume that all the caches are direct mapped caches.

Not really relevant, as the average access times are given.

Assume that the dirty bit is always 0 for all the blocks in the caches

Dirty bits are used for cache replacement, which is not asked about in the given question. But this can mean that there is no extra delay when a read miss in the cache leads to a possible cache-line replacement. (In a write-back cache, when a cache line is replaced, if it is dirty then it must be written back to main memory.)
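The per-fetch formula used in this answer can be sketched in Python (an illustrative check, not part of the original answer, with the question's numbers plugged in):

```python
# Sketch (not part of the original answer): the simplified hierarchical
# formula, where a lower level's time is paid only after a miss above it.
def fetch_time(t1, m1, t2, m2, t_mem):
    # t1/t2: L1/L2 access times (ns); m1/m2: miss rates; t_mem: memory time
    return t1 + m1 * t2 + m1 * m2 * t_mem

inst = fetch_time(2, 0.2, 8, 0.1, 90)   # instruction fetch: 5.4 ns
data = fetch_time(2, 0.1, 8, 0.1, 90)   # operand fetch: 3.7 ns
amat = 0.6 * inst + 0.4 * data
print(round(amat, 2))                    # 4.72
```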

0
@Arjun sir, the AMAT you have mentioned is simultaneous access, but in the question it is mentioned as hierarchical?

Please help.
+2
Is it really the simultaneous-access formula?
0
Sir, if the question says 30% of the blocks are dirty, or something like that, then we would have to consider those blocks too, because in a write-back cache, if the block to be replaced is dirty, we first need to write that block from the cache back to main memory. Here we are ignoring this because all the dirty bits are 0. Is this correct?
0
@Rahul Yes.
0
@Arjun sir, no, but it looks like that. I may be wrong. Can you please explain?
+3
@sowmya

No, it isn't.

Look at it carefully: $t_1 + (1-h_1)t_2 + (1-h_1)(1-h_2)t_3$

Simultaneous access has $h_1 t_1$ at the beginning, not just $t_1$.

The above is a simplified version of the formula

$h_1 t_1 + (1-h_1)h_2(t_1+t_2) + (1-h_1)(1-h_2)(t_1+t_2+t_3)$
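The claimed equivalence of the two forms can be checked numerically with a small Python sketch (an illustration, not part of the comment):

```python
# Sketch: verify that the simplified hierarchical-access formula equals
# the expanded one for arbitrary hit ratios and access times.
import random

def simplified(h1, h2, t1, t2, t3):
    return t1 + (1 - h1) * t2 + (1 - h1) * (1 - h2) * t3

def expanded(h1, h2, t1, t2, t3):
    return (h1 * t1
            + (1 - h1) * h2 * (t1 + t2)
            + (1 - h1) * (1 - h2) * (t1 + t2 + t3))

for _ in range(1000):
    h1, h2 = random.random(), random.random()
    t1, t2, t3 = (random.uniform(1, 100) for _ in range(3))
    assert abs(simplified(h1, h2, t1, t2, t3)
               - expanded(h1, h2, t1, t2, t3)) < 1e-6
print("both forms agree for all sampled values")
```

Expanding the second form and collecting terms shows the coefficient of $t_1$ is $h_1 + (1-h_1)h_2 + (1-h_1)(1-h_2) = 1$, which is why $t_1$ appears unweighted in the simplified form.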
+3
How did we come to know that we have to use hierarchical access here? Why not simultaneous?
0
Hi @Arjun sir, or anyone, can you please explain the below:

You have used the hierarchical formula here. Could you please explain why we used the hierarchical one and not the simultaneous one?
+4

The question mentions the words "memory hierarchy", therefore go for the hierarchical approach.

0
@Arjun sir, why have you used the sequential-access formula in this question? By default we assume simultaneous access, right?

Please correct me if I'm wrong, because of the many questions I have done so far, none mentioned which one to use; I solved them all the simultaneous way and all of them were correct. So I think that is the default method if nothing is mentioned?
0

@Arjun sir,

Please clarify the formula, give a generalised one, or derive how you got it. Please, I am totally confused!!

+1
Always remember that you have to use hierarchical organization. Don't get confused.
0
It is the most general formula for hierarchical access. What is your doubt?
0
Referred-word-first read policy means there is no extra time required to get the requested word from the fetched cache line.

I think the referred-word-first read policy means that the referred word is first brought from memory to the cache and can then be accessed without waiting for the entire block to be transferred.
0

@Arjun sir, is write-back considered between the L1 and L2 caches also, or only between L1 and main memory?

0
Yeah, now it's clear; I wasn't getting the referred-word policy. Thank you for the explanation.
0

Can someone explain this part please.

L1 miss rate * L2 miss rate * Memory access time

0
$(1-h_1)(1-h_2) \times$ memory access time,

where $h_1$ is the hit rate of L1 and $h_2$ is the hit rate of L2.
+1
No, both are hierarchical access.

The first is a simplified form of the second formula.