+27 votes
4.6k views

For inclusion to hold between two cache levels $L_1$ and $L_2$ in a multi-level cache hierarchy, which of the following are necessary?

  I. $L_1$ must be a write-through cache

  II. $L_2$ must be a write-through cache

  III. The associativity of $L_2$ must be greater than that of $L_1$

  IV. The $L_2$ cache must be at least as large as the $L_1$ cache

  A. IV only
  B. I and IV only
  C. I, II and IV only
  D. I, II, III and IV
asked in CO & Architecture by Veteran (59.6k points)
retagged by
+4
Can anyone explain in simple terms what inclusion between two caches means?
+14

Inclusion between two caches means: all information items are originally stored in level $M_n$; during processing, subsets of $M_n$ are copied into $M_{n-1}$, and similarly subsets of $M_{n-1}$ are copied into $M_{n-2}$, and so on.

This makes statement IV correct.

The write-back policy is mostly used between cache levels, because of the dirty-bit mechanism that write-back relies on. That makes statement I wrong.

Reference:

https://www.philadelphia.edu.jo/academics/kaubaidy/uploads/ACA-Lect8n.pdf

Read this PPT about inclusive and exclusive cache hierarchies; in an inclusive cache, the last level is a superset of the previous level.
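
As a toy illustration of this definition (my own sketch, not from the slides): model each level as the set of block addresses it currently holds and check that every level is contained in the next, larger one.

```python
# Minimal sketch: each cache level is modeled as the set of block addresses it holds.
# The level names and contents below are invented purely for illustration.

def inclusion_holds(levels):
    """levels[0] is closest to the CPU (L1); levels[-1] is farthest (e.g. main memory).
    Inclusion requires each level's contents to be a subset of the next level."""
    return all(levels[i] <= levels[i + 1] for i in range(len(levels) - 1))

L1 = {4, 9}                    # subset of L2
L2 = {1, 4, 9, 12}             # subset of main memory
memory = set(range(16))        # M_n: holds everything

print(inclusion_holds([L1, L2, memory]))      # True
print(inclusion_holds([{4, 5}, L2, memory]))  # False: block 5 is missing from L2
```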

+1
0
Anyone please summarize this
+2
I was solving this question on GeeksforGeeks and the solution given there was different. Solution:

Answer is (B), i.e., (I) and (IV) are true. Because inclusion says the L2 cache should be a superset of the L1 cache. If "write-through update" is not used (and "write-back update" is used) at the L1 cache, then the modified data in the L1 cache will not be present in the L2 cache for some time, unless the block in the L1 cache is replaced by some other block. Hence "write-through update" should be used. Associativity doesn't matter. The L2 cache must be at least as large as the L1 cache, since all the words in L1 are also in L2.

Now I am confused about which one is right, GeeksforGeeks or GO; both seem to be correct.
0
The upper-level memory (i.e., L1) is FASTER than the lower level (i.e., L2).
Inclusion says: the upper level should be a subset of the lower level.
0

This one ...

2 Answers

+26 votes
Best answer

The $1^{st}$ is not correct, as the data need not be exactly the same at the same point of time, and so a write-back policy can be used here.

The $2^{nd}$ is not needed when talking only about $L_1$ and $L_2$.

For the $3^{rd}$, the associativity can be equal.

So, only the $4^{th}$ statement is necessarily true, which is choice (A).
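
A small sketch of the point about the $1^{st}$ statement, assuming a toy model in which each level maps block addresses to data: with write-back, a block stays present in both levels (so inclusion of addresses holds) even though its contents differ until the dirty block is written back.

```python
# Toy model (illustration only): each level is a dict {block_address: data}.
L2 = {7: "old", 3: "x"}
L1 = {7: "old"}            # block 7 is present in both levels: inclusion holds
dirty = set()

# CPU writes block 7 with a write-back L1: only L1 is updated, the block is marked dirty.
L1[7] = "new"
dirty.add(7)

# Inclusion (presence of addresses) still holds ...
print(set(L1) <= set(L2))          # True
# ... even though the contents temporarily differ.
print(L1[7] == L2[7])              # False, until block 7 is written back

# Evicting the dirty block propagates the data to L2 (write-back on eviction).
L2[7] = L1.pop(7)
dirty.discard(7)
print(L1.get(7), L2[7])            # None new
```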

answered by Loyal (6.9k points)
edited by
+1
@Marv could you give some reference supporting your reasoning for the 1st statement?

Because on many sites the answer given is B.

http://www.ankurgupta.net/gate-solutions/gate2008cs/

http://cs.stackexchange.com/questions/14174/multi-level-cache-for-which-inclusion-holds
+10
See, the reason is that write-back is used to optimize performance, but that doesn't mean it does not follow the principle of inclusion... I guess the definition of the principle of inclusion is that at some point of time the changes in the cache should be propagated to the lower level, not necessarily right away... I saw it in some book, I do not remember :(
+37

Inclusion says that a block in a higher level of memory must be present in all lower levels.

But it doesn't say that they must have the same content at all times. 

https://goo.gl/8XQMdH

+2
Arjun sir, according to inclusion, a block of the lower level must be present in all higher levels of memory, but sir, you are saying the reverse of it..
–1

What is the correct definition of inclusion, @Arjun sir? The one maahisingh has written or the one you have given? If I go by the memory-hierarchy figure, your version is correct, i.e., a block in the current level indicates its presence in the lower levels.

But sometimes I have read the one maahisingh has written. Please help here.

+9

@rahul sharma and @reena_kandari

Only statement IV is correct, which is option A.

Why is statement I false?

Most architectures that have inclusive hierarchies use a "write-back" cache.

A "write-back" cache differs from a write-through cache in that it does not require modifications in the current level of cache to be eagerly propagated to the next level of cache.

Instead, a write-back cache updates only the current level of cache and marks the modified data as "dirty" using dirty bits.

To know more, read the 2nd answer here:

https://stackoverflow.com/questions/21675470/cache-inclusion-property-multilevel-caching

To see what a dirty block is and how it works in a write-back system, read this slide:

http://www.dauniv.ac.in/downloads/CArch_PPTs/CompArchCh08L04InclusionReplacementWritebackWritethrough.pdf slide #12 , #13

@rahul sharma

What does the inclusion property say?

To simplify evicting data blocks from a level, many memory systems maintain a property called inclusion, in which the presence of an address in a given level of the memory system guarantees that the address is present in all lower levels of the memory system

Reference :[1] http://www.dauniv.ac.in/downloads/CArch_PPTs/CompArchCh08L04InclusionReplacementWritebackWritethrough.pdf

 [2] https://www.philadelphia.edu.jo/academics/kaubaidy/uploads/ACA-Lect8n.pdf

Both references strongly support statement IV.

So only option A is correct.
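
To make the dirty-bit mechanism concrete, here is a rough sketch (my own toy code, not from the referenced slides): a write hit only updates the current level and sets a dirty flag, and the modified data reaches the next level only when the block is evicted.

```python
# Rough sketch of write-back bookkeeping (toy code, invented for illustration).

class WriteBackLevel:
    def __init__(self, next_level=None):
        self.blocks = {}          # block_address -> data
        self.dirty = set()        # addresses modified but not yet propagated
        self.next_level = next_level

    def write(self, addr, data):
        # Write hit: update only this level and mark the block dirty.
        self.blocks[addr] = data
        self.dirty.add(addr)

    def evict(self, addr):
        # A dirty block is written back to the next level only on eviction.
        data = self.blocks.pop(addr)
        if addr in self.dirty and self.next_level is not None:
            self.next_level.blocks[addr] = data
            self.dirty.discard(addr)

l2 = WriteBackLevel()
l2.blocks[5] = "stale"
l1 = WriteBackLevel(next_level=l2)
l1.blocks[5] = "stale"

l1.write(5, "fresh")
print(l2.blocks[5])   # 'stale'  -> L2 is not updated eagerly
l1.evict(5)
print(l2.blocks[5])   # 'fresh'  -> propagated only at eviction time
```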

0
Bikram sir, can you please explain the definition of inclusion given by you? Is the definition wrong that items or data in a lower level must be in a higher level of memory? Is my interpretation wrong here?
+1

Kaluti 

Inclusion between two caches means: all information items are originally stored in level $M_n$; during processing, subsets of $M_n$ are copied into $M_{n-1}$, and similarly subsets of $M_{n-1}$ are copied into $M_{n-2}$, and so on.

This makes statement IV correct.

0
Just think of a write-back cache. Are the contents of the cache and main memory the same all the time in a write-back cache?

No, right? So how can we say that the data will be the same?
0

@sushmita Please check top comments.

0
Write-through is required when there is a cache coherence problem, as the data must be immediately updated at the higher level.
0
Both write-through and write-back are strategies for handling the cache coherence problem; a cache may be write-through or write-back.
+15 votes

Cache levels differ only because their access times differ. If $L_1$ is made write-through, then every write operation will take the same time that the $L_2$ cache takes to write. This erases the difference between $L_1$ and $L_2$, and using them at different levels becomes meaningless. So, statement I need not always hold.

Statement II requires that $L_2$ always be write-through; this is not needed, as we can have a write-back cache in many cases. So, this is also false.

Statement III is not necessary, as associativity has nothing to do with inclusion here.

Statement IV: for the inclusion property to hold, it is required that $L_1$ is a subset of the $L_2$ cache.

Hence, answer = option A

answered by Boss (30.8k points)
edited by
+2
For 1, it can be useful when read operations are predominant.

Also, associativity cannot be small in upper levels for inclusion.
+1

I was expecting this. But I chose to ignore that reads can dominate, because for access time we need the worst case of accessing the cache, which implies $\max(T_{read},T_{write})$.

Also, statement I is false because the $L_1$ cache can also be a write-back/write-around cache rather than only ever being write-through, so it is definitely false.

+1
Statement I is anyway false. But we consider the worst case for real-time or time-constrained systems; in normal systems, the average case is considered.
+1

Average access time is considered for a process (or processes) already under execution (or already executed); it takes in parameters such as the fraction of reads/writes performed by that process, and from those the average access time is calculated.

But if we dissociate the cache from any particular process and just count the read and write cycles executed, then to calculate the access time of the cache we should consider equal fractions of read and write cycles, and that time need not be prefixed with the word "average".
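
For concreteness, a small sketch of the two averaging conventions described above (the timing numbers are invented): weighting by a process's read/write mix versus assuming equal fractions, alongside the worst case mentioned earlier in the thread.

```python
# Invented hit times, only to contrast the averaging conventions discussed above.
t_read, t_write = 2.0, 10.0          # ns: assumed read and write hit times

# Per-process average: weight by that process's actual read/write fractions.
f_read, f_write = 0.8, 0.2
avg_for_process = f_read * t_read + f_write * t_write
print(avg_for_process)               # 3.6 ns

# Cache viewed in isolation: equal fractions of reads and writes.
plain_access_time = 0.5 * t_read + 0.5 * t_write
print(plain_access_time)             # 6.0 ns

# Worst case, i.e. the max(T_read, T_write) mentioned earlier.
print(max(t_read, t_write))          # 10.0 ns
```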

+1
@Arjun sir what is the final answer?
+1
Option A.

Statement IV.
+1
Does associativity here mean the same thing as p-way associativity in a set-associative cache, i.e., how many lines a set can have, or is it something different?
+2
@Arjun sir,

"Also, associativity cannot be small in upper levels for inclusion."

Please explain this a bit.
+1
@Arjun sir please answer the above question.
@Rahul Did you get the reason?
+1
I am not sure, but what I think is that the L1 cache should have higher associativity because it will be accessed more frequently compared to L2. But as we know, parallel search in a set-associative cache is costly; that's why we keep the lower-level caches (L2 and so on) at lower associativity but with more space.
+1
In a set-associative cache, once we know the set number, we check the lines within that set, right? So the lower the associativity, the faster the search within the set. As the L1 cache should be faster than the L2 cache, it should have low associativity, according to me.

What do you think?
+2
But lower associativity means more conflict misses. If I have a 2-way set-associative cache and, on the other side, a 10-way set-associative cache, both can give me the same access time since the search within a set is done in parallel. But higher associativity is more complex and costly to implement because of the parallel search.
+1

If $L_1$ is made write-through, then every write operation will take the same time that the $L_2$ cache takes to write. This erases the difference between $L_1$ and $L_2$, and using them at different levels becomes meaningless.

Please explain: why will it take the same time?

+1
L1 is a write-through cache, so whatever changes are made at this level should immediately be reflected at the L2 level. That is, if a write is made at L1, then a write at L2 must immediately be made. Now, when we calculate the time taken for the write operation, we must consider the maximum of the two. Obviously, the time to write at the L2 level is greater and is what gets counted in this case.

Now, this entire arrangement becomes pointless: why create 2 levels of cache when a write operation takes the same amount of time in both cases? According to me, this is the reason.

Please let me know if I missed something!
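
A small numeric sketch of this argument (the hit times are invented for illustration, and no write buffer is assumed): with write-through, a write is not done until L2 is also updated, so the effective write time is bounded by L2's; with write-back it stays at L1's speed.

```python
# Invented hit times, only to illustrate the argument above (no write buffer assumed).
t_L1, t_L2 = 1, 10     # ns

# Write-through L1: every write also goes to L2 before it is considered complete,
# so the write takes (at least) the slower L2 time.
write_through_write = max(t_L1, t_L2)      # 10 ns, dominated by L2

# Write-back L1: the write completes in L1; L2 is updated later, on eviction.
write_back_write = t_L1                    # 1 ns

print(write_through_write, write_back_write)
```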
0

@Arjun sir in your comment

Also, associativity cannot be small in upper levels for inclusion.

Suppose the L1 cache is 2-way set associative with 4 sets, and let's say the L2 cache is direct mapped with 16 cache lines; assume that both caches have the same block size. Then I think it will follow the inclusion property even though L1 (lower level) has higher associativity than L2 (upper level).

0
No. Because in a direct-mapped cache, the second unique access to the same index can cause a replacement, but with a 2-way set, this can happen only on the third unique access to an index.
0

Sir the above photo is the configuration in above comment. let memory be accessed in sequence 0, 1, 2 ... 8. Sir, I think it is following Inclusion, even though L1(lower level) has higher associativity than L2 (upper level).

Can you please explain how this is not showing Inclusion.
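
As a follow-up, here is a quick simulation sketch (my own toy code, assuming the stated configuration: a 2-way, 4-set LRU L1 and a direct-mapped 16-line L2, with no back-invalidation). The sequence 0, 1, ..., 8 indeed preserves inclusion, since all nine blocks map to distinct L2 lines; but two blocks that share an L2 line while fitting together in one L1 set, for example 0 and 16, break it.

```python
# Sketch of the configuration discussed above (same block size in both caches;
# no back-invalidation, i.e. each cache just runs its own replacement policy).

def simulate(accesses):
    # L1: 2-way set associative, 4 sets (each list is ordered LRU-first).
    l1 = [[] for _ in range(4)]
    # L2: direct mapped, 16 lines (None means empty).
    l2 = [None] * 16

    for b in accesses:
        # Fill L2 first (direct mapped: block b can live only in line b % 16).
        l2[b % 16] = b
        # Then fill L1 (2-way LRU within set b % 4).
        s = l1[b % 4]
        if b in s:
            s.remove(b)
        elif len(s) == 2:
            s.pop(0)              # evict the LRU block of the set
        s.append(b)

    l1_blocks = {b for s in l1 for b in s}
    l2_blocks = {b for b in l2 if b is not None}
    return l1_blocks, l2_blocks

l1_blocks, l2_blocks = simulate([0, 16])   # 0 and 16 share L2 line 0 but fit in one L1 set
print(l1_blocks - l2_blocks)               # {0}: block 0 is in L1 but no longer in L2
```

This is exactly the situation described in the reply above: the direct-mapped L2 evicts block 0 on the second access to its line, while the 2-way L1 still keeps both blocks, so inclusion is violated.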
