
In designing a computer's cache system, the cache block (or cache line) size is an important parameter. Which one of the following statements is correct in this context?

1. A smaller block size implies better spatial locality
2. A smaller block size implies a smaller cache tag and hence lower cache tag overhead
3. A smaller block size implies a larger cache tag and hence lower cache hit time
4. A smaller block size incurs a lower cache miss penalty

0
In the last option, i.e. (d), it is cache miss PENALTY and not just cache miss!
I missed the word "penalty" and was really confused about why the answer is (d). I hope others will not be now (if any)!
0
Yes!

1. A smaller block size means that during a memory access only a smaller part of the nearby addresses is brought into the cache, meaning spatial locality is reduced.

2. A smaller block size means a larger number of blocks (assuming the cache size is constant), and hence the index bits go up and the offset bits go down. But the tag bits remain the same.

3. A smaller block size implying a larger cache tag is true, but this can't lower the cache hit time in any way.

4. A smaller block size incurs a lower cache miss penalty. This is because during a cache miss an entire cache block is fetched from the next lower level of memory. So, a smaller block size means only a smaller amount of data needs to be fetched, which reduces the miss penalty. (The cache block size can go down to the width of the data bus to the next level of memory; beyond that point, increasing the cache block size only increases the cache miss penalty.)

Correct Answer: $D$
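The miss-penalty argument in point 4 can be sketched with a toy transfer model; the function name and all parameter values below are made up for illustration, assuming a fixed access latency plus one bus transfer per bus-width chunk of the block:

```python
def miss_penalty(block_size, bus_width, transfer_time, latency):
    """Toy model of miss penalty: a fixed access latency plus one bus
    transfer per bus-width chunk of the block (all units hypothetical)."""
    transfers = -(-block_size // bus_width)  # ceiling division
    return latency + transfers * transfer_time

# Shrinking the block (down to the bus width) lowers the penalty:
print(miss_penalty(64, 8, 2, 10))  # 10 + 8*2 = 26
print(miss_penalty(32, 8, 2, 10))  # 10 + 4*2 = 18
print(miss_penalty(8, 8, 2, 10))   # 10 + 1*2 = 12 (block = bus width: the floor)
```

Once the block size equals the bus width, a refill is a single transfer and shrinking the block further buys nothing.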

by Veteran (431k points)
+13

"A smaller block size implying larger cache tag is true" - why is that true? If there are more blocks, then we need more bits for cache indexing, which means fewer bits are left as tag bits. And how can a smaller block size impact a cache miss? In fact, if the block size is bigger we can bring more data into the block; the delay will be a bit more, but the miss rate will be lower.

http://faculty.cse.tamu.edu/djimenez/614/lecture8.html

+80

Let the initial cache size be say 8 KB. If the cache block size is say 64 B, we will have 128 cache entries. So, we need 7 index bits.

If the cache block size is say 32 B, we will have 256 cache entries and 8 index bits.

Now, how many tag bits are required depends on the size of the virtual address space. Suppose we have $2^{32}$ bytes of address space.
In the first case, the number of tag bits will be 32 - 7 - 6 (6 for the 64 B cache block) = 19.
In the second case, the number of tag bits will be 32 - 8 - 5 = 19.
So, reducing the cache block size doesn't affect the number of tag bits. But we need tag bits for each cache block. So, in the first case, the total memory needed for tags = 19 * 128 bits, while in the second case it will be 19 * 256 bits.

And a smaller cache block size reduces the "miss overhead", not the "miss rate". This is because on a miss, a smaller amount of data needs to be taken from the lower level. (But if sufficient bandwidth is there, this won't be a problem for larger block sizes.)
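The tag-bit arithmetic above can be sketched in a few lines (a direct-mapped cache and byte-addressed sizes are assumed; the helper function is hypothetical):

```python
from math import log2

def tag_bits(addr_bits, cache_size, block_size):
    """Tag bits per block and total tag storage (bits) for a
    direct-mapped cache; sizes are in bytes."""
    offset = int(log2(block_size))
    index = int(log2(cache_size // block_size))
    per_block = addr_bits - index - offset
    total = per_block * (cache_size // block_size)
    return per_block, total

# 32-bit address space, 8 KB cache, as in the example above:
print(tag_bits(32, 8 * 1024, 64))  # (19, 2432)  -> 19 * 128 bits
print(tag_bits(32, 8 * 1024, 32))  # (19, 4864)  -> 19 * 256 bits
```

The per-block tag width stays at 19 either way; only the total tag storage grows with the smaller block size, because there are more blocks to tag.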

0
awesome explanation @Arjun :)
0
Awesome explanation @Arjun sir
+22

Large block size:

• If we increase the block size, programs with high spatial locality will do well, but programs with poor spatial locality will not (larger blocks are good for spatial locality)
• It takes more time to bring a block from the next level of the memory hierarchy (increased miss penalty)
• If we increase the block size beyond a limit, we are overlooking temporal locality, and this compromise in temporal locality will lead to an increase in the miss rate.

+3
+1
Temporal locality needs the whole block that is being used for each data item to be in the cache. With a larger block size the cache holds fewer distinct blocks, so some blocks that would have fit with a smaller block size are left out in main memory, leading to misses. That is why the miss rate also rises somewhat as the block size grows.
0

If Option C is modified as follows:

A smaller block size implies a larger cache tag and hence higher cache hit rate.

Then I believe this is a true statement? Yes or no?

+1
Do "larger cache tag" and "smaller cache tag" mean the number of tag bits or the tag memory size?
+5
I found b and c explanation a bit confusing.

(B) A smaller block size means more number of blocks (assuming cache size constant) and hence index bits go up and offset bits go down. But the tag bits remain the same.

(C) A smaller block size implying larger cache tag is true, but this can't lower cache hit time in any way.

@Arjun sir, can you please clarify: as you mention in (b) that the tag bits remain the same, shouldn't they also remain the same in option (c)? But in (c), you are saying that a smaller block meaning a larger cache tag is true?

Are we talking about the number of TAG bits in the address format or the total size of the TAG directory here?
+7

@Arjun Sir,   "A smaller block size implies a larger cache tag"
Nice explanation.

but it means,  tag bits are same but tag directory size will be increased.
But in the question they have written "larger cache tag"

So Cache tag size means tag directory size?

+2
I am having the same doubt. Tag  size in question means tag directory size or number of tag bits in address?
0
In case of a fully associative cache, decreasing the block size will increase the number of tag bits, right?

Assume the block size is halved; then the number of blocks is doubled ...

If the number of blocks is doubled, then the TAG bits are doubled, since it is a fully associative cache

Right? Please correct me if I am wrong ...
0
Reducing the block size also means we need to move more blocks into the cache than earlier for the same amount of data, i.e., more accesses in case of cache misses. Doesn't this translate to an increased cache miss penalty? Why does a reduced block size **always** reduce the cache miss penalty?
0
great explanation @Arjun sir
0
@srestha, please may I know one thing: if a cache has a larger block size, will it not accommodate more data compared to a cache with a smaller block size? Here, the number of tags required will be fewer. But in the case of a cache with a smaller block size and more tags, will there be more overhead? Please correct me if wrong.
+1
Although exploiting spatial locality doesn't always lead to a lower miss rate, as we have seen when we access a 2D array column-wise while it is originally stored in memory in row order.
0
Can anyone explain the third point that @Sachin Mittal made about temporal locality?

How will temporal locality lead to an increase in the miss rate?
0
"hence index bits go up and offset bits go down. But the tag bits remain the same"? How do the tag bits remain the same?
+1

@Prince Sindhiya   The third point of @Sachin sir is:

Initially, say our block size is very small; then we can accommodate more blocks in the cache, which means we can store more distinct accesses, and hence we get high temporal locality (if they are accessed in the near future, there is a higher chance of a hit). But this case has very low spatial locality, as we have brought in fewer words due to the small block size; if words near our previous accesses are referenced, there is a high chance of a miss. In an average process execution, temporal and spatial accesses are about equally likely, so in this case the number of spatial-access misses will be much higher, and the net miss rate will be higher.

Now, if we gradually increase the block size, we reduce the number of blocks in the cache, which means slightly lower temporal locality, but on the other hand we get higher spatial locality, so the net miss rate will reduce. So we can see that a trade-off is going on between temporal and spatial locality. Increasing the block size further will at some point land us on the minimum point of the miss-rate curve (this is the trade-off point).

But if we increase the block size further still, temporal locality becomes very small while spatial locality is large, so the miss rate starts to rise again.

+1
Reducing the block size could lead to an increase in the miss rate because of spatial locality, as fewer bytes will be transferred into the block. So decreasing the block size decreases the miss penalty and increases the miss rate.
0

@Arjun-Sir, can we say that a smaller block size will increase the tag bits, and hence the width of the tag comparator will increase? So the cache hit time will surely increase and never decrease for option (C).

0

@Ayush Upadhyaya "Surely increase" may not be true as tag comparison happens in parallel.

0

@Arjun Sir,

Can you please clarify: as you mention in option (b) that the tag bits remain the same, shouldn't they also remain the same in option (c)? But in (c), you are saying that a smaller block meaning a larger cache tag is true?

0

@Arjun sir,

Can you please give the definition of miss penalty? I am a little bit confused about the definition.

0

I found b and c explanation a bit confusing.

(B) A smaller block size means more number of blocks (assuming cache size constant) and hence index bits go up and offset bits go down. But the tag bits remain the same.

(C) A smaller block size implying larger cache tag is true, but this can't lower cache hit time in any way.

Arjun sir mentioned in option (b) that the tag bits remain the same; then shouldn't they also remain the same in option (c)? But in option (c), he's saying that "smaller block means larger cache tag" is true?

Are we talking about the number of TAG bits in the address format or the total size of the TAG directory here?

0

@pranay562

Tag + index + offset = 3 + 4 + 5 = 12, i.e. their sum is constant.

Option B

smaller block size $\implies$ fewer bits for the offset.

so we can either increase the tag bits or the index bits. So option B will not hold true for all cases.

like we can have 3 + 5 + 4 = 12 (tag bits are still the same)

Option C

smaller block size $\implies$ fewer bits for the offset.

so we can either increase the tag bits or the index bits. So option C will not hold true for all cases.

like we can have 4 + 4 + 4 = 12 (tag bits are increased).

0

@pranay562 he edited

Block: The memory is divided into equal-size segments; each segment is called a block. Data in the cache is retrieved in the form of blocks. The idea is to use spatial locality (once a location is retrieved, it is highly probable that nearby locations will be retrieved in the near future).

TAG bits : Each cache block is given a set of TAG bits to identify which main memory block is present in that cache block.

Option A: If the block size is small, fewer nearby addresses for future CPU references would be present in that block. Hence this is not better spatial locality.

Option B: If the block size is smaller, there would be more blocks in the cache; hence more total cache tag storage would be needed, not less.

Option C: The cache tag overhead is more (because there are more blocks due to the smaller block size), but more cache tag bits can't lower the hit time (if anything, it will increase).

Option D: If there is a miss at the cache memory (i.e., the block needed by the CPU is not present in the cache), then that block has to be moved from the next lower level of memory (say, main memory) in the memory hierarchy. If the block size is smaller, it takes less time to place it into the cache memory, hence a lower miss penalty. Hence option D.
by Loyal (9.9k points)
0

What is miss penalty?

Is it the number of times a miss occurs, or the time taken to place a block from the lower level into the cache memory?

As you said, "if the block size is lower, then it takes less time to be placed into cache memory, hence less miss penalty".

Please explain more... I am a little bit confused now :(

Option D makes perfect sense, as there is no relation between the tag bits and the size of blocks/lines in caching.
by (71 points)

A) This is wrong. Spatial locality reduces on reducing the block size.

B) The tag bits do not change on changing the block size; only the cache tag overhead (total tag storage) will increase with a smaller block size.

(Suppose we have an address space of 20 bits, a 16 KB cache, and a 16-byte block size: then offset bits = 4, line bits = 10 (direct mapped), tag bits = 6. Now keep everything the same and change the block size to 64 bytes: then offset bits = 6, line bits = 8, but the tag bits remain the same, 6.)

C) Same explanation as option B. But there is no effect on the cache hit time, because searching in the cache happens in parallel (in all mapping techniques): all the comparators and multiplexers work in parallel.

D) The answer is D. Suppose there is a data bus of width 8 words between the L1 and L2 caches, and the L1 block size is 4 words. If a miss occurs in L1, only 4 words have to be transferred from L2 to L1, which takes a single bus transfer. But if we increase the L1 block size to 16 words, each miss needs two transfers, which takes more time.
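The bus-width point in option D can be sketched with a toy model; the function name and all numbers below are assumptions for illustration:

```python
def refill_time(block_words, bus_words, cycles_per_transfer):
    """Cycles to refill one cache block over a bus of the given width
    (a toy model; all numbers are assumptions for illustration)."""
    transfers = -(-block_words // bus_words)  # ceiling division
    return transfers * cycles_per_transfer

# With an 8-word bus, a 4-word and an 8-word block each need one transfer,
# but a 16-word block needs two: above the bus width, bigger blocks cost more.
print(refill_time(4, 8, 1), refill_time(8, 8, 1), refill_time(16, 8, 1))
```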

by Active (4.3k points)