The Gateway to Computer Science Excellence
+10 votes

A 64-word cache and main memory are divided into 16-word blocks. The cache access time is 10 ns/word and the main memory access time is 50 ns/word. The hit ratio is 0.8 for read operations and 0.9 for write operations. Whenever there is a miss in the cache, the associated block must be brought from main memory to the cache, for both read and write operations. 40% of references are write operations. Find the average access time if write-through is used.

in CO and Architecture by Active (1.6k points)
edited by | 2.4k views
What is the default memory organization: simultaneous or hierarchical? This is the biggest doubt. Please, someone clear it.

Sir, I have a small confusion.

In the given formula $T_{avg_R} = h_r \times t_c + (1-h_r) \times (t_m + t_c)$, what is $t_c$? Is it the cache access time before the new block comes from main memory, or the cache access time after the new block has arrived and the access is served from the cache?

I think the formula should be $h_r \times t_c + (1-h_r) \times (t_m + 2t_c)$, because in case of a miss the cache is accessed twice: once before the block comes from main memory and once after it arrives.

Please clear my confusion @Arjun

By default we use simultaneous.
By default we use hierarchical; it is more practical.

Simultaneous access is like parallel access, which is not easily practical inside a CPU where things happen at nanosecond intervals.
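The numerical difference between the two assumptions is small here. A minimal sketch (Python, using this question's read-side numbers) comparing hierarchical and simultaneous read access:

```python
# Hierarchical vs. simultaneous read access, using this question's numbers.
# Assumptions: cache access = 10 ns/word, one 16-word block from main
# memory = 16 * 50 = 800 ns, read hit ratio = 0.8.
t_c = 10            # cache access time (ns)
t_block = 16 * 50   # time to bring one block from main memory (ns)
h_r = 0.8           # read hit ratio

# Hierarchical: the cache is always checked first, so a miss pays t_block + t_c.
t_hier = h_r * t_c + (1 - h_r) * (t_block + t_c)

# Simultaneous: cache and memory are accessed in parallel, so a miss pays only t_block.
t_sim = h_r * t_c + (1 - h_r) * t_block

print(t_hier)  # 170.0
print(t_sim)   # 168.0
```

With these numbers the two organizations differ by only 2 ns, but the question expects one specific assumption, which is why the doubt above matters.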

1 Answer

+17 votes
Best answer
Cache access time = 10 ns
1 block main memory access time $= 50\times 16 = 800 ns$ (since on a miss, the entire cache block is retrieved from main memory)

Then use this formula
$T_{avg_R} = h_r \times t_c+(1-h_r)\times (t_m + t_c) = 0.8 \times 10 + 0.2 \times (800 + 10) = 170 ns$ (Hierarchical access is default in case of read)

Whenever the cache misses, the data (entire cache block) must come from main memory, for writes as well, as per the question. Also, for every write operation one word of data is written to main memory, since the cache is WRITE THROUGH. In a WRITE THROUGH cache, since main memory is always updated, the memory arrangement is simultaneous, and hence the cache access time need not be considered (it is smaller than the main memory access time and both happen in parallel).

$T_{avg_W} = h_w\times t_m + (1-h_w) \times (t_m + 800)$

$= 0.9 \times 50 + 0.1 \times 850 = 130 ns$

$T_{avg}= f_r\times T_{avg_R}+ f_w\times T_{avg_W} = 0.6 \times 170 + 0.4 \times 130 = 154 ns.$
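The whole calculation can be checked with a few lines of Python (all numbers taken straight from the answer above):

```python
# Average access time for the write-through cache in this question.
t_c = 10             # cache access time per word (ns)
t_m = 50             # main memory access time per word (ns)
t_block = 16 * t_m   # one 16-word block from main memory = 800 ns
h_r, h_w = 0.8, 0.9  # read and write hit ratios
f_r, f_w = 0.6, 0.4  # fraction of read and write references

# Read: hierarchical access, and the block is fetched on a miss.
t_avg_r = h_r * t_c + (1 - h_r) * (t_block + t_c)  # 170 ns

# Write: one word always goes to memory (write-through, simultaneous);
# on a miss the block is additionally brought into the cache (write-allocate).
t_avg_w = h_w * t_m + (1 - h_w) * (t_m + t_block)  # 130 ns

t_avg = f_r * t_avg_r + f_w * t_avg_w
print(t_avg)  # 154.0
```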
by (199 points)
edited by

@Arjun sir,

The access time of cache is given as 10 ns/word, and it is a 64-word cache.

So cache access time = 10 × 64 = 640 ns in total, right? Please clarify.

The whole cache is not fetched; only a cache line is fetched.
It is a 64-word cache, okay, but we fetch only 1 word.

But suppose it was given that main memory access = 20 ns; when a miss occurs we need to bring the entire block, so it would have been multiplied (for main memory).
Someone please tell me the actual meaning of cache access time. Is it the time required to fetch a byte, the time required to search the cache, or the sum of both?


Cache access time includes:

the time required to search the cache and then fetch a byte.

Yes, it is the sum of both.


A 64-word cache and main memory, with 16-word blocks:

then shouldn't the cache contain only 4 blocks? And does the same apply to main memory?

@Arjun sir, please confirm.

As mentioned in the question, "Whenever there is a miss in cache, associated block must be brought from main memory to cache for read and write operation".

In case of a read miss, we spend 10 ns in the cache to check whether the word is present + 800 ns to bring the block into the cache + a further 10 ns to read the required data.

In case of a write hit, 50 ns to write the word in main memory (assuming simultaneous access, because it is the write-through policy).

In case of a write miss, 50 ns to write the word in main memory + 800 ns to get the block into the cache.


In case of a read miss, we will spend 10 ns in the cache to check whether it is present + 800 ns to bring the block into the cache + a further 10 ns to read the required data.

We don't need to add the 10 ns twice. When data is sent from main memory to the cache, it can simultaneously go to the CPU as well (sniffing technique). Otherwise, even if a repeat read happens from the cache, we don't need to assume hierarchical access anymore and can assume simultaneous access (refer to the examples in Hamacher). That is, in all cases add the cache time only once for a miss.

In case of a write miss, 50 ns to write the word in main memory + 800 ns to get the block into the cache.

Yes, for this question. But it would be really rare for such an assumption to be given in an actual GATE question nowadays. So read the question really carefully during GATE; most of the 5+ year old questions are not really relevant in this area now. Keep the concepts clean; you can answer in GATE even if not in a test series.

Sir, in the write-through policy the processor takes a lot of time because of the memory write.

Can this problem be solved by using a write buffer to hold those write requests, to be completed by some other unit, allowing the processor to proceed to the next instruction immediately?
Yes, practically the processor does not really wait. You can think of it like this: "if you can think of an optimization, the professors and researchers working on just this problem would have thought of it already and implemented it."
Thank you sir :)
Why are we adding 800 here while calculating the average access time for write, when in write-through we use the no-write-allocate policy?

@Arjun sir, @sushmita:

I understood the answer, but why isn't it like this:

0.9 * 10 + 0.1 * 800    : Case 1

instead of

0.9 * 50 + 0.1 * 850    : Case 2

For a write hit (write-through):

both the cache location and the main memory location are updated.

For a write miss (write-through):

When a write miss occurs in a computer that uses the write-through protocol, the information is written directly into the main memory.

For a write hit, we directly write to the cache by accessing the word, which would take 10 ns, and the simultaneous update of main memory may be done by circuitry.

For a write miss, we directly write to the main memory as it is write-through, which would take 800 ns, since it is block addressable.

The highlighted lines in italics are from Hamacher.

Please explain.



One thing to remember for a write-through cache: "Each miss (read or write) reads a block from memory, and each store writes an item (not a block) to memory."

(Source- , slide-22)

For a write hit (write-through), cache and memory are both updated simultaneously. Since a memory write takes longer than a cache write, we need to consider only the time to write a word to memory; that is why we use 0.9 * 50 ns = 45 ns.

For a write miss (write-through and write-allocate), we need to bring a block from memory and then write a word simultaneously to cache and memory. So the total time required for a write miss is 0.1 * (800 + 50) = 0.1 * 850 ns = 85 ns.


@dan31: thanks for your explanation, I got it.

Suppose it were the write-back protocol instead of write-through; how would the calculation go?

Read:

0.8 * 10 + 0.2 * (800 + 10)

Write:

0.9 * (10 + update time to MM when the line is flushed) + 0.1 * (800 + 10 + update time to MM when the line is flushed)

Am I right?

And if yes, what would that update time be? Will it be given in the question?


The read expression is correct.

"Update time to MM when the line is flushed" should not be added to every write hit/miss as you have written; it applies only when a dirty line is actually replaced.

There must be some data mentioned in the question about when we should modify the main memory.

Check this one for some clarity-
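To make the comparison concrete, here is a sketch of how the write-back numbers could work out under one common set of assumptions (these are NOT from the question): write hits go only to the cache, and on any miss the incoming block costs 800 ns, plus another 800 ns if the evicted line is dirty. The dirty-line probability `p_dirty` below is purely hypothetical and would have to be given in the question.

```python
# Hypothetical write-back calculation; p_dirty is an assumed value,
# NOT given in the original question.
t_c = 10         # cache access time (ns)
t_block = 800    # one 16-word block to/from main memory (ns)
h_r, h_w = 0.8, 0.9
p_dirty = 0.5    # assumed probability that the replaced line is dirty

# On any miss: (write back the dirty line, if any) + fetch new block + access cache.
miss_penalty = t_block + t_c + p_dirty * t_block

t_avg_r = h_r * t_c + (1 - h_r) * miss_penalty  # 250.0 ns with these numbers
t_avg_w = h_w * t_c + (1 - h_w) * miss_penalty  # 130.0 ns
print(0.6 * t_avg_r + 0.4 * t_avg_w)            # 202.0 ns
```

The point is that the write-back penalty attaches to the replacement on a miss, not to every write, which is why the commenter's expression above over-counts.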


But in the write-through technique, the no-write-allocate policy is normally used for a write miss.
The solution given here uses the write-through technique with the write-allocate policy.
But how will we know when, under the write-through policy, we have to use write-allocate or not?
Okay, got it: here the question explicitly asks us to consider the write-allocate policy along with write-through.