The Gateway to Computer Science Excellence
+29 votes
4k views

In designing a computer's cache system, the cache block (or cache line) size is an important parameter. Which one of the following statements is correct in this context?

  1. A smaller block size implies better spatial locality
  2. A smaller block size implies a smaller cache tag and hence lower cache tag overhead
  3. A smaller block size implies a larger cache tag and hence lower cache hit time
  4. A smaller block size incurs a lower cache miss penalty
asked in CO & Architecture by Veteran (101k points)
retagged by

3 Answers

+52 votes
Best answer
  1. A smaller block size means that during a memory access only a smaller set of nearby addresses is brought into the cache, meaning spatial locality is exploited less.
     
  2. A smaller block size means a larger number of blocks (assuming the cache size is constant), and hence the index bits go up while the offset bits go down. But the tag bits remain the same.
     
  3. A smaller block size implying a larger cache tag (i.e., larger total tag storage) is true, but this cannot lower the cache hit time in any way.
     
  4. A smaller block size incurs a lower cache miss penalty. This is because during a cache miss an entire cache block is fetched from the next lower level of memory. So, a smaller block size means only a smaller amount of data needs to be fetched, which reduces the miss penalty. (The cache block size can be reduced down to the width of the data bus to the next level of memory; beyond the bus width, increasing the cache block size only increases the cache miss penalty.)
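To illustrate point 1, here is a toy direct-mapped cache simulation (a sketch with assumed parameters — an 8 KB cache and a byte-by-byte sequential scan — not part of the original answer). For a sequential access pattern, a smaller block size exploits less spatial locality and produces more misses:

```python
def count_misses(cache_size, block_size, addresses):
    """Count misses in a toy direct-mapped cache for a sequence of byte addresses."""
    num_blocks = cache_size // block_size
    lines = [None] * num_blocks           # one stored tag per cache line
    misses = 0
    for addr in addresses:
        block_no = addr // block_size     # strip the offset bits
        index = block_no % num_blocks     # direct-mapped placement
        tag = block_no // num_blocks
        if lines[index] != tag:           # miss: fetch the block, install its tag
            lines[index] = tag
            misses += 1
    return misses

seq = list(range(4096))                   # sequential scan of 4 KB, byte by byte
for bs in (16, 32, 64):
    print(f"block={bs}B: misses={count_misses(8 * 1024, bs, seq)}")
```

Halving the block size doubles the miss count for this access pattern, which is exactly the reduced spatial locality described above.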
answered by Veteran (357k points)
edited by
+9

"A smaller block size implying larger cache tag is true" — why is that true? If there are more blocks, then we need more bits for cache indexing, which means fewer bits are left as tag bits. And how can a smaller block size impact the cache miss? In fact, if the block size is bigger we can bring more data into the block; the delay will be a bit more, but the miss rate lower.

please help, thanks!
http://faculty.cse.tamu.edu/djimenez/614/lecture8.html

+55

Let the initial cache size be say 8 KB. If the cache block size is say 64 B, we will have 128 cache entries. So, we need 7 index bits.

If the cache block size is say 32 B, we will have 256 cache entries and 8 index bits.

Now, how many tag bits are required depends on the size of the virtual address space. Suppose we have 2^32 bytes of address space.
In the first case, the number of tag bits will be 32 - 7 - 6 (6 for the 64 B cache block) = 19.
In the second case, the number of tag bits will be 32 - 8 - 5 = 19.
So, reducing the cache block size doesn't affect the number of tag bits. But we need tag bits for each cache block. So, in the first case the total memory needed for tags is 19 * 128 bits, while in the second case it is 19 * 256 bits.
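The arithmetic above can be checked in a few lines (same assumed parameters: an 8 KB direct-mapped cache and a 32-bit address space):

```python
import math

def tag_bits(addr_bits, cache_size, block_size):
    """Return (tag bits per entry, number of entries) for a direct-mapped cache."""
    entries = cache_size // block_size
    index_bits = int(math.log2(entries))
    offset_bits = int(math.log2(block_size))
    return addr_bits - index_bits - offset_bits, entries

for bs in (64, 32):
    t, n = tag_bits(32, 8 * 1024, bs)
    print(f"block={bs}B: tag bits={t}, total tag storage={t * n} bits")
# block=64B: tag bits=19, total tag storage=2432 bits
# block=32B: tag bits=19, total tag storage=4864 bits
```

The tag bits per entry stay at 19 in both cases; only the total tag storage grows with the smaller block size, because there are twice as many entries.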

And a smaller cache block size reduces the "miss penalty", not the "miss rate". This is because on a miss, less data needs to be fetched from the lower level. (But with sufficient bandwidth, this is not a problem for larger block sizes.)
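A crude miss-penalty model makes this concrete (all numbers here are assumed for illustration: a fixed memory latency plus one bus transfer per bus-width chunk of the block):

```python
def miss_penalty_cycles(block_size, bus_width=8, latency=100, cycles_per_beat=10):
    """Miss penalty = fixed latency + cycles to transfer the block over the bus."""
    beats = block_size // bus_width       # bus transfers needed per block
    return latency + beats * cycles_per_beat

for bs in (32, 64, 128):
    print(f"block={bs}B: penalty={miss_penalty_cycles(bs)} cycles")
```

The penalty grows with block size because of the transfer term; the fixed latency is paid regardless. A wider (higher-bandwidth) bus shrinks the transfer term, which is the bandwidth caveat mentioned above.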

0
awesome explanation @Arjun :)
0
Awesome explanation @Arjun sir
+12

Long block size:

  • If we increase the block size, programs with high spatial locality will do well, but programs with poor spatial locality will not (good for spatial locality)
  • A larger block takes more time to bring in from the next level of the memory hierarchy (increased miss penalty)
  • If we increase the block size beyond a limit, we start overlooking temporal locality (fewer blocks fit in the cache), and this compromise in temporal locality leads to an increase in miss rate.

0
Temporal locality needs the whole block being used for each data item to stay in the cache. So if you increase the block size, fewer blocks fit in the cache, and some blocks that would earlier have been resident may be left out in main memory, leading to misses. Why does the miss rate rise? Because with the increase in block size there are fewer blocks in the cache, so there will be a small rise in the miss rate as well.
0

If Option C is modified as follows:

A smaller block size implies a larger cache tag and hence higher cache hit rate.

Then I believe this is a true statement? Yes or no?

0
Do "larger cache tag" and "smaller cache tag" mean the number of tag bits or the tag memory size?
+3
I found the explanations for (B) and (C) a bit confusing.

(B) A smaller block size means more number of blocks (assuming cache size constant) and hence index bits go up and offset bits go down. But the tag bits remain the same.

(C) A smaller block size implying larger cache tag is true, but this can't lower cache hit time in any way.

 

@Arjun sir, can you please clarify: as you mention in (B) that the tag bits remain the same, shouldn't they also remain the same in option (C)? But in (C) you are saying that a smaller block implying a larger cache tag is true?

Are we talking about the number of TAG bits in the address format or the total size of the TAG directory here?
+2

@Arjun Sir,   "A smaller block size implies a larger cache tag"
Nice explanation.

But it means the tag bits are the same while the tag directory size increases.
But in the question they have written "larger cache tag"

So Cache tag size means tag directory size?

0
I am having the same doubt. Does "tag size" in the question mean the tag directory size or the number of tag bits in the address?
0
In case of a fully associative cache, decreasing the block size will increase the number of tag bits, right?

Assume the block size is halved; then the number of blocks is doubled ...

If the number of blocks is doubled, then the TAG bits are doubled, since it is a fully associative cache.

Right? Please correct me if I am wrong ...
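A quick arithmetic check (assuming a 32-bit address, for illustration only): a fully associative cache has no index bits, so the tag bits per entry are just the address bits minus the offset bits. Halving the block size adds one tag bit per entry; it does not double the tag bits per entry, though the number of tag entries does double.

```python
def fa_tag_bits(addr_bits, block_size):
    """Tag bits per entry in a fully associative cache (no index field)."""
    offset_bits = block_size.bit_length() - 1   # log2 for power-of-two block sizes
    return addr_bits - offset_bits

print(fa_tag_bits(32, 64))   # 26 tag bits per entry
print(fa_tag_bits(32, 32))   # 27 tag bits per entry
```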
0
Reducing the block size also means we need to move more blocks into the cache than before for the same amount of data, which means more accesses in case of cache misses. Doesn't this translate to an increased cache miss penalty? Why does a reduced block size **always** reduce the cache miss penalty?
0
great explanation @Arjun sir
0
@srestha, please may I know one thing: if a cache has a larger block size, will it not accommodate more data compared to a cache with a smaller block size? Here, the number of tags required will be fewer. But in the case of a cache with a smaller block size and more tags, will there be more overhead? Please correct me if wrong.
0
Although exploiting spatial locality doesn't always lead to a lower miss rate, as we have seen: for example, if we access a 2D array column-wise while it is stored in memory in row order.
0
Can anyone explain the third point that @Sachin Mittal made about temporal locality?

How will compromising temporal locality lead to an increase in miss rate?
+6 votes
Option D makes perfect sense, as there is no relation between the tag bits and the block/line size in caching.
answered by (71 points)
+6 votes
Block: The memory is divided into equal-size segments. Each segment is called a block. Data in the cache is retrieved in the form of blocks. The idea is to use spatial locality (once a location is retrieved, it is highly probable that nearby locations will be retrieved in the near future).

TAG bits: Each cache block is given a set of TAG bits to identify which main memory block is present in that cache block.

Option A: If the block size is small, there would be fewer nearby addresses present in that block for future references by the CPU. Hence this does not give better spatial locality.

Option B: If the block size is smaller, the number of blocks in the cache would be greater, hence more total cache tag bits would be needed, not fewer.

Option C: The cache tag bits are more (because of more blocks due to the smaller block size), but more cache tag bits cannot lower the hit time (they would even increase it).

Option D: If there is a miss in the cache memory (i.e., the block needed by the CPU is not present in the cache), then that block has to be moved from the next lower level of memory (say, main memory) in the memory hierarchy; if the block size is smaller, it takes less time to be placed into the cache memory, hence a lower miss penalty. Hence option D.
answered by Loyal (8.4k points)
edited by
0

@Regina Phalange

What is miss penalty?

Is it the number of times a miss occurs, or the time taken to place a block from the lower level into the cache memory?

As you said, "if the block size is lower, then it takes less time to be placed into cache memory, hence less miss penalty".

Please explain more... I am a little bit confused now :(



