39 votes
The access times of the main memory and the cache memory, in a computer system, are $500\ nsec$ and $50\ nsec$, respectively. It is estimated that $80\%$ of the main memory requests are for read and the rest for write. The hit ratio for the read access only is $0.9$ and a write-through policy (where both main and cache memories are updated simultaneously) is used. Determine the average time of the main memory (in ns).
in CO and Architecture

This is my understanding of the question; ping me if I made a mistake!


We know that,

Avg. Memory access time = (hit ratio of cache * Time spent for accessing Cache) + (hit ratio of main memory * Time spent for accessing Main memory)

Here, in the question, we have been asked to find the average time of the main memory and not the whole memory access time. So we just need to do the calculation for the main memory.

Avg. Main Memory access time = hit ratio of main memory * Time spent for accessing main memory

But, main memory access time differs in read and write accesses. Hence,

Avg. Main memory access time = hit ratio of main memory * ( read access time + write access time )

Let's calculate the read access and write access times separately.

Considering 80% read accesses and 0.9 hit ratio and hierarchical access,

Read access Time = 0.8 * 0.1 * (50 + 500) = 44 ns

Considering 20% write accesses and simultaneous access,

Write access Time = 0.2 * (500) = 100 ns

i.e., Avg. main memory access time = 100 + 44 = 144 ns
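The arithmetic in this interpretation can be sketched in Python (values taken from the question; note it counts only the main-memory component, not the whole access time):

```python
# Average main-memory time under this interpretation (values from the question)
t_cache, t_main = 50, 500            # access times in ns
read_frac, write_frac = 0.8, 0.2     # 80% reads, 20% writes
read_hit = 0.9                       # cache hit ratio for reads

# Reads touch main memory only on a cache miss (hierarchical access)
read_time = read_frac * (1 - read_hit) * (t_cache + t_main)
# Every write goes to main memory (write-through, simultaneous update)
write_time = write_frac * t_main

avg_main = read_time + write_time
print(round(avg_main, 2))  # 144.0
```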


@Arjun sir, in the previous GATE 1 CO and Architecture set, the options / blank space (for integer answers) for questions 19, 31 and 32 are missing. Please make the necessary changes if possible.

Thanks. Have fixed them.

@Arjun sir, in the question only the read hit ratio is given; how did you find the write hit?

Is read hit = write hit (just an assumption in this question)?

The hit ratio for the read access only is 0.9 and a write-through policy,

What does this line mean? Does it mean that the cache hit ratio is given only for read operations? I am confused.
But in case of a hit, why do we write to the cache?

@arjun sir i am not getting your answer

@val_pro20 @codeirtam yes, read hit is 0.9. In the write-through policy, write access time (irrespective of cache hit or miss) = max(cache access time, main memory access time); hence in this case write access time = max(50, 500) = 500, as updating the cache and MM is done simultaneously.

Refer to this video for better understanding; it will clear your doubts.


4 Answers

59 votes
Best answer

Average memory access time = Time spent for read + Time spent for write

= Read time when cache hit + Read time when cache miss + Write time when cache hit + Write time when cache miss

$= 0.8 \times 0.9 \times 50 + 0.8 \times 0.1 \times (500+50)$

(assuming hierarchical read from memory and cache, as only simultaneous write is mentioned in the question) $+\ 0.2 \times 0.9 \times 500 + 0.2 \times 0.1 \times 500$ (simultaneous write mentioned in the question)

$= 36 + 44 + 90 + 10 = 180\ ns$
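As a quick sanity check, the same computation in Python (hierarchical read, simultaneous write, values from the question):

```python
t_cache, t_main = 50, 500   # ns
read_frac, hit = 0.8, 0.9   # 80% reads, read hit ratio 0.9

amat = (read_frac * hit * t_cache                      # read hit: cache only
        + read_frac * (1 - hit) * (t_cache + t_main)   # read miss: cache, then memory
        + (1 - read_frac) * t_main)                    # write: max(50, 500) = 500

print(round(amat, 2))  # 180.0
```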



Write through policy is used, so

- cache will always have updated data.

- every write has to go to main memory

So why is [Write time when cache hit + Write time when cache miss] split out?

Can't it simply be 20 × max(500, 50)? Max because the memory and cache updates are done simultaneously.

Say 100 requests: 80 reads, 20 writes

read hit = 90% of 80 = 72,   read miss = 8 , write = 20

total time = 72(50) + 8(50+500) + 20(max(500 , 50)) = 18000

avg time = 18000/100 = 180ns
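The per-request tally above can be checked directly:

```python
# 100 requests: 80 reads (90% hit => 72 hits, 8 misses), 20 writes (write-through)
requests = 100
read_hits, read_misses, writes = 72, 8, 20

total = read_hits * 50 + read_misses * (50 + 500) + writes * max(500, 50)
print(total, total / requests)  # 18000 180.0
```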

You are correct: there is no need to split the time into write hit and write miss. But that doesn't change the answer.

And I used hierarchical access. There is no hint as to what strategy is used in the question, but it looks more like hierarchical is meant, as it's a 1992 question; simultaneous access would have come much later.
simultaneous updation is mentioned in the question.
Thanks - I had missed it. Simultaneous write is mentioned and so it must be hierarchical read and simultaneous write.
Why Hierarchical read ?

Can we assume either of the two, simultaneous / hierarchical read, as the default if a similar question appears in GATE now? If yes, which one?
The default is hierarchical for read/write, but simultaneous for the write in a write-through cache, because that is the current practical implementation.
Why hierarchical for read, I wonder? Why can't we just initiate both reads in parallel and stop the read from memory if the data becomes available in the cache (or just ignore the data we get from memory on a hit)? Just wondering! Can you support your statement on hierarchical being the default for reads with any good reference?
How is it wrong if $$T_{read} = 0.9 \times 50ns + 0.1 \times \big( 50 + 500 + 50\big)ns$$

why isn't miss penalty counted?

@Akash I also had this doubt. But I guess the reason is that these accesses are being done at the maximum speed possible. Say the cache access time is 20 ns and the main memory access time is 200 ns. Once we know a cache hit occurred, we notify the memory to cancel its read. But this cancellation might take more time on main memory (RAM organization), and thus might effectively slow down the next memory access. This is what I understood after talking to someone working on memory. Maybe in the future, with new memory technology, this might happen. But I don't have a reference here.

@Amar Why the extra "50"? You mean the data comes back to the cache? Actually, on the reverse path the data goes simultaneously to the cache as well as the CPU. I guess this technique is called "sniffing".


@Arjun Sir, I have the following queries:

1) It is mentioned in the question that the "hit ratio" for read access only is 0.9. Then how can we assume a hit ratio while calculating the average write access time?

2) Will the hit ratio for write operations be different from the hit ratio for read operations?

Sir, since there is no hit or miss ratio given for write, shouldn't we always write, whether hit or miss? Why are we assuming 0.9 and 0.1 for write also? Can we say 0.2 × 500 directly? Because ultimately it is (0 × 500 + 1 × 500) × 0.2.
Sir, here you have approved that on a cache miss during a write, both the write time and the main memory access time are counted. These cache questions are really confusing me.


You should read the lines in the question. There, a special condition is given:

Whenever there is a miss in the cache, the associated block must be brought from main memory to the cache for read and write operations

Anything like this can be given, and we have to solve accordingly. I'll add all the default cases to be considered in the coming gatecse book.

Got it sir.
In this question we have to calculate the average time of the main memory; it should not include the cache memory. This is explicitly mentioned in the question.

Average time of main memory = Average time of read in main memory + Average time of write in main memory.
Can you please explain why you mention the access time of the cache memory?

Since 80% of accesses to main memory are reads: of the total misses in the cache (either for read or write), 80% are reads and 20% are writes. I think we should calculate the access time of the main memory only when a cache miss occurs, and not include the cache access time.

Can you please explain why you have mentioned cache hit and cache miss?
@Arjun sir, with a write miss, whether we have to consider write-allocate or no-write-allocate is the main confusion here. I mean, why are we not considering the time to bring the block from main memory to the cache on a write miss? Why are questions so ambiguous? Why can't GATE questions mention everything clearly? So many assumptions to make; it's really confusing and annoying.
What will be the answer if the write-through policy is not mentioned in the question?
Why is only 500 ns considered during a write? Are we not considering cache access during a write?
We consider simultaneous access in write operation of write through cache ONLY
Arjun sir, what would be the answer in write-back?
Simultaneous write will happen for sure in write-through, but why are we considering a simultaneous memory organization for the write?
@Arjun Sir, "The hit ratio for the read access only is 0.9": for the write operation the hit ratio is not given, so how is 0.9 taken for write too?

Why are you taking the "hit ratio for read access" for cache writes? You should have given the reason for doing so. The answer written by sameer2009 looks clearer and more understandable.

Practically is it possible to have a cache design that supports hierarchical reads and simultaneous writes?
@Sushmita In case of the write-back policy, we have to consider the time for writing back: when a new data word comes, to accommodate it we need to replace an existing data word in the event of a miss.
That is,

$T_{avg} = 0.8 \times (0.9 \times 50 + 0.1 \times (500+50+X)) + 0.2 \times (write_h \times 50 + write_m \times (50+500+X))$

where $X$ is the time for writing back into memory, since memory doesn't contain the updated data.
I totally agree with @Akash Kanase. We should solve the problem according to the information given in the question.

During a write operation, when a miss occurs in the cache, the memory block should first be allocated to the cache and then both should be updated simultaneously (as the question says the write-through technique is used). Then the average time for a write should be

0.2 × 0.9 × 500 + 0.2 × 0.1 × (500 + 500)

The first 500 is for block allocation to the cache from main memory, and the second 500 is for the simultaneous update.

Why is this not so?
Sir why are main memory hit  into cache hit


What is your doubt? Please specify clearly.


@ can you please check this 

If the write-back policy is used, then:

1) Tread: same as write-through
Tread = Hit(read) × Tcache + Miss(read) × (Tcache + Tm)


In write-back, if the required location is in the cache, then we write to the cache only and mark it dirty, so later, when that block is chosen as the victim for replacement, it is written back to MM. Hence on a hit only the cache time is needed.

If a miss occurs, i.e. the required location is not in the cache, then that block should first be brought into the cache and then written to; hence the time is Tcache + Tm in case of a miss.

Twrite = Hit(write) × Tcache + Miss(write) × (Tcache + Tm)
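A small parameterized sketch of these write-back formulas (the write hit ratio is not given in the question, so it is left as a parameter here; as in the formulas above, the time to write back a dirty victim block is omitted):

```python
def t_avg_write_back(hit_read, hit_write, t_cache, t_mem, read_frac=0.8):
    """Average access time for a write-back cache, per the formulas above.
    On a miss (read or write) the block is first brought into the cache."""
    t_read = hit_read * t_cache + (1 - hit_read) * (t_cache + t_mem)
    t_write = hit_write * t_cache + (1 - hit_write) * (t_cache + t_mem)
    return read_frac * t_read + (1 - read_frac) * t_write

# e.g. if the write hit ratio were also 0.9 (an assumption, not given):
print(round(t_avg_write_back(0.9, 0.9, 50, 500), 2))  # 100.0
```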

68 votes
They are asking for the average memory access time.


If nothing is mentioned about whether it is simultaneous or hierarchical memory access, we need to consider it as hierarchical memory access only. Here nothing is given about the read operation, so we take hierarchical memory access for reads. But for writes, a write-through policy is given, which results in simultaneous memory access. Also, write operations in a write-through cache always go to the main memory, where the hit rate is 100%. Hence the hit rate for write = 1.


Since there are 80% read operations and 20% write operations,

Average memory access time =  0.8 * Time spent for read + 0.2 * Time spent for write


Time spent for read = Hit-rate-for-read × (cache access time) + Miss-rate-for-read × (cache access time + main memory access time)

= (0.9 × 50 + 0.1 × (500 + 50))

= 45 + 55 = 100 ns


Time spent for write = 500 ns (simultaneous write mentioned in the question)


Average memory access time = 0.8 × 100 + 0.2 × 500 = 80 + 100 = 180 ns
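The two components above generalize to a small helper (a sketch; the function name and parameters are illustrative):

```python
def amat_write_through(read_frac, read_hit, t_cache, t_main):
    # Hierarchical read: a miss pays cache time plus main-memory time
    t_read = read_hit * t_cache + (1 - read_hit) * (t_cache + t_main)
    # Write-through with simultaneous update: every write takes max of the two
    t_write = max(t_cache, t_main)
    return read_frac * t_read + (1 - read_frac) * t_write

print(round(amat_write_through(0.8, 0.9, 50, 500), 2))  # 180.0
```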
Thanks buddy
Awesome explanation
thanks brother for clearing my doubt.
That was nice explanation...cheers!!!!
easiest way to understand is basics clear
9 votes
When write-through is implemented in a simultaneous-access memory organization, the hit ratio for the write operation always becomes 1 (nothing is given about the write hit ratio).

$T_{avg\ write }$=500 ns.

$T_{avg \ read}=h_1 \times T_c+(1-h_1)\times T_m$

$= 0.9 \times 50\ ns + 0.1 \times 500\ ns = 95\ ns$


The average time of the main memory $= 0.8 \times 95\ ns + 0.2 \times 500\ ns = 176\ ns$


Correct me if I've made any mistake. Thank you.
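Completing the arithmetic in this answer (note it assumes simultaneous access for reads too, which is why it comes out below the 180 ns of the hierarchical-read answers):

```python
t_avg_read = 0.9 * 50 + 0.1 * 500      # simultaneous read: a miss costs only t_main
t_avg = 0.8 * t_avg_read + 0.2 * 500   # write-through: every write takes 500 ns
print(t_avg_read, round(t_avg, 2))  # 95.0 176.0
```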


@Prateek Raghuvanshi, can you give some source which says the hit ratio of write is always one in write-through with simultaneous access?


@Shubhgupta they didn't mention the hit or miss ratio for write; with simultaneous access it will always be a hit, right?

No, I was asking about the reasoning. What I understood is that if we are updating the cache, then every update goes through main memory too, so that's why the hit ratio is always 1, right?

And one more thing: you have used simultaneous access for read also, but in the question it is mentioned for updates only; it is nowhere mentioned that we read the cache and MM simultaneously.
yes this answer should be the best answer.
0 votes
Answer is 200 ns = 50 + 0.2 × 500 × 0.9 (write-throughs that hit in cache) + 0.8 × 0.1 × 500 (read misses) + 0.2 × (500 + 500) × 0.1 (write-throughs that miss in cache)
why 500+500 on write cache miss?
thanks for the link! But then it should be 50 + 500 + 50, as first it checks the cache, then loads (and also writes) the desired block from main memory into the cache. So in total you access the cache twice.
then that 50 should also be there for a cache miss during read, right?

cache miss → memory read → cache update

So the cache is accessed twice even during a read.
Frankly speaking, it should be, but I think for the sake of brevity (or due to its minuscule nature) it is omitted.
Yes. I guess another reason is that there is no need for the cache update to finish before the CPU gets the data. From the main memory, data can be simultaneously passed to the CPU and updated in the cache. There is no need to wait for the cache update to be finished.


