+18 votes
The access times of the main memory and the cache memory in a computer system are $500$ nsec and $50$ nsec, respectively. It is estimated that $80\%$ of the main memory requests are for read and the rest for write. The hit ratio for read accesses only is $0.9$, and a write-through policy (where both main and cache memories are updated simultaneously) is used. Determine the average memory access time.
asked in CO & Architecture by Veteran (59.5k points)
edited by | 2.7k views

4 Answers

+21 votes
Best answer

Average memory access time = Time spent for read + Time spent for write

= Read time when cache hit + Read time when cache miss + Write time when cache hit + Write time when cache miss

$= 0.8 \times 0.9 \times 50 + 0.8 \times 0.1 \times (500+50)$ (assuming hierarchical read from cache and main memory, as only simultaneous write is mentioned in the question) $+\; 0.2 \times 0.9 \times 500 + 0.2 \times 0.1 \times 500$ (simultaneous write mentioned in the question)

$= 36 + 44 + 90 + 10 = 180 ns$
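The arithmetic above can be double-checked with a short Python sketch (values taken from the question; variable names are illustrative):

```python
# Accepted-answer arithmetic: hierarchical read (a miss pays cache + memory),
# simultaneous write-through (a write pays memory time only),
# 80% reads, 0.9 hit ratio.
CACHE, MEM = 50, 500   # access times in ns
READ, HIT = 0.8, 0.9

avg = (READ * HIT * CACHE                   # read hits:    0.8 * 0.9 * 50  = 36
       + READ * (1 - HIT) * (CACHE + MEM)   # read misses:  0.8 * 0.1 * 550 = 44
       + (1 - READ) * HIT * MEM             # write hits:   0.2 * 0.9 * 500 = 90
       + (1 - READ) * (1 - HIT) * MEM)      # write misses: 0.2 * 0.1 * 500 = 10
print(round(avg, 2))  # 180.0
```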


answered by Veteran (352k points)
edited by
What will be the answer if the write-through policy is not mentioned in the question?
Why is only 500 ns considered during a write? Are we not considering cache access during a write?
We consider simultaneous access in the write operation of a write-through cache ONLY.
Arjun sir, what would be the answer in write-back?
Simultaneous write will happen for sure in write-through, but why are we considering simultaneous memory organization for write?
@Arjun Sir, "The hit ratio for the read access only is 0.9". For the write operation the hit ratio is not given, so how is 0.9 taken for write too?

Why are you taking the "hit ratio for read access" for cache writes? You should have given the reason for doing so. The answer written by sameer2009 looks clearer and more understandable.

Practically is it possible to have a cache design that supports hierarchical reads and simultaneous writes?
@ Sushmita In case of the write-back policy, we have to consider the time for writing back: when a new data word comes in on a miss, we need to replace an existing data word to accommodate it.
That is,

$T_{avg} = 0.8 \times (0.9 \times 50 + 0.1 \times (500+50+X)) + 0.2 \times (write_h \times 50 + write_m \times (50+500+X))$

where $X$ = time for writing the dirty block back into memory, since memory doesn't contain the updated data.
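A numeric sketch of this write-back expression; note that the question gives no write hit ratio and no write-back time, so the values of `write_h`, `write_m`, and `X` below are hypothetical, for illustration only:

```python
# Write-back sketch: write_h, write_m, and X are assumptions, not given data.
CACHE, MEM = 50, 500          # access times in ns
X = 500                       # assumed time to write a dirty block back to memory
write_h, write_m = 0.9, 0.1   # assumed write hit/miss ratios

t_avg = (0.8 * (0.9 * CACHE + 0.1 * (MEM + CACHE + X))
         + 0.2 * (write_h * CACHE + write_m * (CACHE + MEM + X)))
print(round(t_avg, 2))  # 150.0 with these assumed numbers
```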
I totally agree with @Akash Kanase. We should solve the problem according to the information given in the question.
+9 votes
They are asking for the average memory access time.


If nothing is mentioned about whether memory access is simultaneous or hierarchical, we take it to be hierarchical. In this question nothing is stated about the read operation, so we take hierarchical access for reads. For writes, a write-through policy is given, which results in simultaneous memory access. Also, for write operations a write-through cache always goes to main memory, and in main memory the hit rate is 100%. Hence the hit rate for write = 1.


Since there are 80% read operations and 20% write operations,

Average memory access time =  0.8 * Time spent for read + 0.2 * Time spent for write


Time spent for read = Hit-rate-for-read * cache access time + Miss-rate-for-read * (cache access time + main memory access time)

= 0.9 ⨯ 50 + 0.1 ⨯ (500 + 50)

= 45 + 55 = 100 ns


Time spent for write = 500 ns (simultaneous write, as mentioned in the question)


Average memory access time =  0.8 * 100 + 0.2 * 500 = 80+100 = 180ns
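One way to see why no write hit ratio is needed: under write-through with simultaneous update, a write costs the main-memory time whether it hits or misses, so the ratio drops out of the average. A small Python check (illustrative only):

```python
# Under simultaneous write-through, the write time is MEM for both hits and
# misses, so the average is the same for any write hit ratio.
CACHE, MEM = 50, 500  # access times in ns

def amat(write_hit):
    read = 0.9 * CACHE + 0.1 * (CACHE + MEM)         # hierarchical read
    write = write_hit * MEM + (1 - write_hit) * MEM  # always MEM for write-through
    return 0.8 * read + 0.2 * write

for h in (0.0, 0.5, 1.0):
    print(h, round(amat(h), 2))  # 180.0 regardless of h
```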
answered by Junior (551 points)
0 votes
Answer is 200 ns = 50 + 0.2 ⨯ 0.9 ⨯ 500 (write-throughs that hit in cache) + 0.8 ⨯ 0.1 ⨯ 500 (read misses) + 0.2 ⨯ 0.1 ⨯ (500 + 500) (write-throughs that miss in cache)
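For comparison, this breakdown assumes every access first pays the cache lookup time and that a write miss fetches the block before writing through. As a Python sketch (one interpretation, not the accepted answer):

```python
# 200 ns interpretation: an upfront cache lookup on every access,
# plus memory traffic for write-through hits, read misses, and write misses.
CACHE, MEM = 50, 500  # access times in ns

avg = (CACHE                       # every access checks the cache first
       + 0.2 * 0.9 * MEM           # write-through hits still update memory
       + 0.8 * 0.1 * MEM           # read misses fetch from memory
       + 0.2 * 0.1 * (MEM + MEM))  # write misses: fetch block, then write through
print(round(avg, 2))  # 200.0
```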
answered by Loyal (6.8k points)
why 500+500 on write cache miss?
Thanks for the link! But then it should be 50 + 500 + 50, since it first checks the cache and then loads (and also writes) the desired block from main memory into the cache. So in total you access the cache twice.
Then that 50 should also be there for a cache miss during a read, right?

cache miss → memory read → cache update

So the cache is accessed twice even during a read.
Frankly speaking, it should be, but I think for the sake of brevity (or because its contribution is minuscule) it is omitted.
Yes. I guess another reason is there is no need for the cache update to be done for the CPU to get the data. From the main memory, data can be simultaneously passed to the CPU and updated in the cache. There is no need to wait for the cache update to be finished.


0 votes

Default case (simultaneous access):

answered by (11 points)

