+34 votes
6.9k views

For inclusion to hold between two cache levels $L_1$ and $L_2$ in a multi-level cache hierarchy, which of the following are necessary?

  I. $L_1$ must be a write-through cache

  II. $L_2$ must be a write-through cache

  III. The associativity of $L_2$ must be greater than that of $L_1$

  IV. The $L_2$ cache must be at least as large as the $L_1$ cache

  A. IV only
  B. I and IV only
  C. I, II and IV only
  D. I, II, III and IV
+4
Can anyone explain in simple terms what inclusion between two caches means?
+21

Inclusion between two caches means: all information items are originally stored in level Mn; during processing, subsets of Mn are copied into Mn-1, and similarly subsets of Mn-1 are copied into Mn-2, and so on.

This makes statement IV correct.

A write-back policy is mostly used between cache levels because of the dirty-bit mechanism it relies on. That makes statement I wrong.

Reference:

https://www.philadelphia.edu.jo/academics/kaubaidy/uploads/ACA-Lect8n.pdf

Read this PPT about inclusive and exclusive cache hierarchies; in an inclusive cache hierarchy, the last level is a superset of the previous level.
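
For intuition, here is a minimal toy sketch (my own, not the question's exact configuration; the class name `InclusiveHierarchy` is invented): every block filled into L1 is also filled into L2, and an L2 eviction back-invalidates the L1 copy, so L1 always stays a subset of L2, which is only sustainable when L2 has at least as many blocks as L1 (statement IV).

```python
# Toy two-level inclusive hierarchy (illustrative sketch only).
from collections import OrderedDict

class InclusiveHierarchy:
    def __init__(self, l1_blocks, l2_blocks):
        assert l2_blocks >= l1_blocks, "statement IV: L2 at least as large as L1"
        self.l1 = OrderedDict()          # block -> None, kept in LRU order
        self.l2 = OrderedDict()
        self.l1_blocks, self.l2_blocks = l1_blocks, l2_blocks

    def access(self, block):
        for cache, cap in ((self.l2, self.l2_blocks), (self.l1, self.l1_blocks)):
            cache.pop(block, None)       # refresh LRU position on a hit
            cache[block] = None          # fill on a miss
            if len(cache) > cap:
                victim, _ = cache.popitem(last=False)   # evict LRU block
                if cache is self.l2:
                    self.l1.pop(victim, None)           # back-invalidate the L1 copy
        assert set(self.l1) <= set(self.l2)  # inclusion: L1 contents are a subset of L2 contents

h = InclusiveHierarchy(l1_blocks=2, l2_blocks=4)
for b in [0, 1, 2, 3, 0, 4]:
    h.access(b)
```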

0
Anyone please summarize this
+3
I was solving this question on GeeksforGeeks and the solution given there was different. Solution:

Answer is (B), i.e., (I) and (IV) are true, because inclusion says the L2 cache should be a superset of the L1 cache. If "write-through update" is not used (and "write-back update" is used) at the L1 cache, then the modified data in the L1 cache will not be present in the L2 cache for some time, unless the block in the L1 cache is replaced by some other block. Hence "write-through update" should be used. Associativity doesn't matter. The L2 cache must be at least as large as the L1 cache, since all the words in L1 are also in L2.

Now I am confused about which one is right, GeeksforGeeks or GO; both seem to be correct.
0
upper level memory is FASTER (i.e. L1) than lower level (i.e L2)
inclusion says : lower level should be subset of upper level
0

From Wikipedia:

In order for inclusion to hold, certain conditions need to be satisfied. L2 associativity must be greater than or equal to L1 associativity irrespective of the number of sets. The number of L2 sets must be greater than or equal to the number of L1 sets irrespective of L2 associativity. All reference information from L1 is passed to L2 so that it can update its replacement bits.

Value inclusion: it is not necessary for a block to have the same data values when it is cached in both the higher- and lower-level caches, even though inclusion is maintained. But if the data values are the same, value inclusion is maintained.[1] This depends on the write policy in use, as a write-back policy does not notify the lower-level cache of changes made to the block in the higher-level cache. However, in the case of a write-through cache there is no such problem.

Link: https://en.wikipedia.org/wiki/Cache_inclusion_policy
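
To make the quoted structural conditions concrete, here is a tiny hedged helper (the function name `inclusion_feasible` is my own, just for illustration): with a common block size, L2 can guarantee inclusion only if it has at least as many sets and at least as much associativity as L1.

```python
# Checks the two structural conditions quoted above.
def inclusion_feasible(l1_sets, l1_ways, l2_sets, l2_ways):
    return l2_ways >= l1_ways and l2_sets >= l1_sets

# A 2-way, 4-set L1 against a direct-mapped, 16-line L2 (the configuration
# debated in the comments below) fails the associativity condition.
print(inclusion_feasible(l1_sets=4, l1_ways=2, l2_sets=16, l2_ways=1))  # False
```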

+1

@Moin Mukhtar Write-through/write-back policies are between a cache and main memory, not between two caches. If you have to update other caches simultaneously, the write-update policy is used, and to invalidate data in other caches, write-invalidate is used.

0

I strongly think that B is the answer. https://en.wikipedia.org/wiki/Cache_inclusion_policy

The argument given in the best answer for option A is wrong, because it is necessary that at any particular instant of time the lower-level cache is a proper subset of the higher-level cache. That is why, whenever a block is replaced in the higher-level cache, the corresponding block is also invalidated in the lower-level cache. And, the other way round, whenever a block is written in the lower-level cache, it has to be written immediately in the higher-level cache, which can only be done using a write-through policy.

3 Answers

+36 votes
Best answer

Statement I is not correct, as the data need not be exactly the same at the same point of time, and so a write-back policy can be used here.

Statement II is not needed when talking only about $L_1$ and $L_2$.

For statement III, the associativity can be equal.

So, only statement IV is necessarily true: choice (A).

+2
@Marv, could you give some reference supporting your reasoning for statement I?

Many sites give the answer as B.

http://www.ankurgupta.net/gate-solutions/gate2008cs/

http://cs.stackexchange.com/questions/14174/multi-level-cache-for-which-inclusion-holds
+11
See, the reason is that write-back is used to optimise performance, but that doesn't mean it does not follow the principle of inclusion. I guess the definition of the principle of inclusion is that at some point of time the changes in a cache should be propagated to the lower level, not necessarily right away. I saw it in some book I do not remember :(
+54

Inclusion says that a block in a higher level of memory must be present in all lower levels.

But it doesn't say that they must have the same content at all times. 

https://goo.gl/8XQMdH
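
A tiny sketch of that distinction (illustrative only; plain dicts stand in for the caches): with a write-back L1, a store leaves the block's address in both levels, so address inclusion holds, while the L2 copy stays stale until the dirty block is written back.

```python
# Illustrative sketch: address inclusion without value inclusion.
l2 = {100: "old value"}                                 # block 100 cached in L2
l1 = {100: {"data": "old value", "dirty": False}}       # and in L1

# The CPU writes the block: write-back updates only L1 and sets the dirty bit.
l1[100] = {"data": "new value", "dirty": True}

assert set(l1) <= set(l2)            # address inclusion still holds
assert l1[100]["data"] != l2[100]    # but the values differ (no value inclusion)

# Only when the dirty L1 block is evicted is the new value written back to L2.
l2[100] = l1.pop(100)["data"]
```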

+1
@Arjun sir, according to inclusion, a block of the lower level must be present in all higher levels of the memory hierarchy, but sir, you are saying the reverse of it.
0

What is the correct definition of inclusion, @Arjun sir? The one maahisingh has written or the one you have given? If I go by the memory hierarchy figure, your version is correct, i.e. a block in the current level indicates its presence in the lower level.

But sometimes I have read the one maahisingh has written. Please help here.

+12

@rahul sharma and @reena_kandari

Statement IV is only correct , which is Option A.

Why is statement I false?

Most architectures that have inclusive hierarchies use a "write-back" cache.

A "write-back" cache differs from a write-through cache in that it does not require modifications in the current level of cache to be eagerly propagated to the next level of cache.

Instead, a write-back cache updates only the current level of cache and marks the modified data "dirty" using dirty bits.

To know more, read the 2nd answer here:

https://stackoverflow.com/questions/21675470/cache-inclusion-property-multilevel-caching

To see what a dirty block is and how it works in a write-back system, read this slide deck:

http://www.dauniv.ac.in/downloads/CArch_PPTs/CompArchCh08L04InclusionReplacementWritebackWritethrough.pdf slides #12 and #13

@rahul sharma

What does the inclusion property say?

To simplify evicting data blocks from a level, many memory systems maintain a property called inclusion, in which the presence of an address in a given level of the memory system guarantees that the address is present in all lower levels of the memory system.

References: [1] http://www.dauniv.ac.in/downloads/CArch_PPTs/CompArchCh08L04InclusionReplacementWritebackWritethrough.pdf

[2] https://www.philadelphia.edu.jo/academics/kaubaidy/uploads/ACA-Lect8n.pdf

Both references strongly support statement IV.

So only option A is correct.

0
Bikram sir, can you please explain the definition of inclusion given by you? Is the definition wrong that items or data in the lower level must be in the higher level of memory? Is my interpretation wrong here?
+1

@Kaluti

Inclusion between two caches means: all information items are originally stored in level Mn; during processing, subsets of Mn are copied into Mn-1, and similarly subsets of Mn-1 are copied into Mn-2, and so on.

This makes statement IV correct.

0
Just think of a write-back cache. Are the contents of the cache and main memory the same all the time with a write-back cache?

No, right? So how can we say that the data will be the same?
0

@sushmita Please check top comments.

0
Write-through is required when there is a cache coherence problem, as the data must be immediately updated at the higher level.
0
Both write-through and write-back are strategies related to the cache coherence problem; a cache may be write-through or write-back.
+21 votes

Cache levels differ only because their access times are different. If $L_1$ is made write-through, then on every write operation it will take the same time that the $L_2$ cache takes to write. This erases the difference between $L_1$ and $L_2$, and using them at different levels becomes meaningless. So, statement I cannot always be true.

Statement II asks that $L_2$ always be write-through; this is not true, since we can have a write-back cache in many cases. So, this is also false.

Statement III says nothing meaningful, as associativity has nothing to do here.

Statement IV: for the inclusion property to hold, it is required that $L_1$ be a subset of the $L_2$ cache.

Hence, answer = option A

+2
For 1, it can be useful when read operations are predominant.

Also, associativity cannot be small in upper levels for inclusion.
+2

I was expecting this. But I chose to ignore that reads can dominate, because for access time we need the worst case of accessing a cache, which implies $\max(T_{read},T_{write})$.

Also, statement I is false because the $L_1$ cache can also be a write-back/write-around cache rather than only ever being write-through. So it is definitely false.

+1
Statement I is false anyway. But we consider the worst case for real-time or time-constrained systems; in normal systems, the average case is considered.
+1

Average access time is considered for a process that is already under execution (or has executed), taking in parameters such as the fraction of time it spends on reads/writes; using those, its average access time is calculated.

But if we dissociate a cache from any process and just count the read and write cycles executed, then to calculate the access time of that cache we should consider equal fractions of read and write cycles. That time need not be prefixed with the word "average".

+1
@Arjun sir what is the final answer?
+1
Option A, i.e., statement IV.
+1
Does associativity here mean the same thing as p-way associativity in a set-associative cache, i.e. how many lines a set can have, or is it something different?
+2
@Arjun sir,

"Also, associativity cannot be small in upper levels for inclusion."

Please explain this a bit.
+1
@Arjun sir please answer the above question.
@Rahul Did you get the reason?
+1
I am not sure, but what I think is that the L1 cache should be more associative because it will be accessed more frequently than L2. But as we know, parallel search in a set-associative cache is costly; that's why we keep the lower-level caches, L2 and so on, with lower associativity but more space.
+1
In a set-associative cache, once we know the set number we check the lines within that set, right? So the lower the associativity, the faster the search within the set. As the L1 cache should be faster than the L2 cache, it should have low associativity, according to me.

What do you think?
+2
But lower associativity means more conflict misses. If I have a 2-way set-associative cache and, on the other side, a 10-way set-associative cache, both can give me the same access time since the search is done in parallel; but higher associativity is more complex and costly to implement because of the parallel search.
+1

If $L_1$ is made write-through, then on every write operation it will take the same time that the $L_2$ cache takes to write. This erases the difference between $L_1$ and $L_2$, and using them at different levels becomes meaningless.

Please explain why it will take the same time.

+1
L1 is a write-through cache, so whatever changes are made at this level should immediately be reflected at the L2 level. That is, if a write is made at L1 then a write at L2 should be made immediately. Now, when we calculate the time taken for the write operation, we must consider the maximum of the two. Obviously, the time to write at the L2 level is greater and is what gets counted in this case.

Now, this entire process becomes useless. Why create 2 levels of cache when a write operation takes the same amount of time in both cases? According to me, this is the reason.

Please let me know if I missed something!
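
To put rough numbers on that argument (the latencies below are assumed for illustration, not given in the question):

```python
# Illustrative latencies only (assumed).
t_l1_write, t_l2_write = 2, 10                         # ns
write_back_latency    = t_l1_write                     # update L1 only, mark dirty
write_through_latency = max(t_l1_write, t_l2_write)    # must wait for L2 as well
print(write_back_latency, write_through_latency)       # 2 10
```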
0

@Arjun sir in your comment

Also, associativity cannot be small in upper levels for inclusion.

Suppose the L1 cache is 2-way set associative with 4 sets, and let's say the L2 cache is direct mapped with 16 cache lines, and assume that both caches have the same block size. Then I think it will follow the inclusion property even though L1 (lower level) has higher associativity than L2 (upper level).

0
No. Because in a direct-mapped cache, the second unique access to the same index can cause a replacement, but with a 2-way set this can happen only on the 3rd unique access to an index.
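
A small trace sketch of this point, using the configuration from the comment above (L1: 2-way, 4 sets; L2: direct mapped, 16 lines); LRU within each L1 set and no back-invalidation are my simplifying assumptions. Blocks 0 and 16 both fall in L1 set 0 and L2 line 0, so L1 keeps both while L2 loses block 0.

```python
L1_SETS, L1_WAYS, L2_LINES = 4, 2, 16
l1 = {s: [] for s in range(L1_SETS)}   # set index -> blocks in LRU order
l2 = {}                                # line index -> block

def access(block):
    l2[block % L2_LINES] = block       # direct mapped: a conflicting block evicts the old one
    ways = l1[block % L1_SETS]
    if block in ways:
        ways.remove(block)
    ways.append(block)
    if len(ways) > L1_WAYS:            # 2-way: eviction needs a 3rd unique block in the set
        ways.pop(0)

for b in (0, 16):                      # both map to L1 set 0 and L2 line 0
    access(b)

l1_blocks = {b for ways in l1.values() for b in ways}
print(l1_blocks <= set(l2.values()))   # False: block 0 survives in L1 but was evicted from L2
```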
0

Sir, the above photo shows the configuration from the above comment. Let memory be accessed in the sequence 0, 1, 2 ... 8. Sir, I think it is following inclusion, even though L1 (lower level) has higher associativity than L2 (upper level).

Can you please explain how this is not showing inclusion?

0
Sir, what does associativity mean here?
+1
Associativity here refers to the number of lines in a set of a cache.
+1 vote

Why statements I and II are correct

If the caches are not write-through, we can have an imbalance between the LRU counters in the L1 and L2 caches, leading to a situation where the inclusion property will eventually fail. Hence every hit in L1 must also propagate to L2, in order to keep the LRU counters consistent.

The video given below shows a good example of this problem when write-through caches are not used.

https://www.youtube.com/watch?v=J8DQG9Pvp3U

Why statement IV is also correct

The size of L1 and L2 caches can be equal, or L2 can be greater. 

Case 1 (they are equal or L2 is greater): this part is quite self-explanatory; in this case L2 has at least as many entries as L1, and any extra lines in L2 do not affect our inclusion property.

Case 2 (L1 is greater than L2): let's say L1 = 10 lines and L2 = 2 lines. If blocks 0 and 1 are accessed, they lead to compulsory misses and are now available in L1 as well as L2.

Now let's say we access block 3:

in the L2 cache it must replace at least one of 0 and 1, while due to L1's greater size block 3 need not displace either of them there; when this occurs we will have violated our inclusion property (see the sketch below).
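
A quick sketch of this case (my own simplification: fully associative caches with FIFO replacement and no back-invalidation):

```python
from collections import deque

L1_CAPACITY, L2_CAPACITY = 10, 2
l1, l2 = deque(), deque()

def access(block):
    for cache, cap in ((l1, L1_CAPACITY), (l2, L2_CAPACITY)):
        if block not in cache:
            if len(cache) == cap:
                cache.popleft()        # evict the oldest block
            cache.append(block)

for b in (0, 1, 3):                    # block 3 forces an eviction in L2 but not in L1
    access(b)

print(set(l1) <= set(l2))              # False: block 0 is still in L1 but gone from L2
```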

Hence I believe the answer to be (C).

Hello everyone, this is my very first post, so I'm sorry if my explanation is not as clear as it should be.

0
The video shown is for a set-associative cache, but in a direct-mapped cache we don't use any replacement algorithm; the block to be replaced is fixed.

So the inclusion property can be followed if L1 and L2 are direct-mapped caches and the write-back policy is used.

Hence "L1 must be a write-through cache" is not a necessary condition for inclusion.
