
More than one word is put in one cache block to:

1. exploit the temporal locality of reference in a program
2. exploit the spatial locality of reference in a program
3. reduce the miss penalty
4. none of the above

+1

It exploits the spatial locality of reference in a program: if a nearby location is accessed soon afterwards, it will already be in the cache.

Consider a scenario similar to cooking: when an ingredient is taken from the cupboard, you also take the nearby ingredients along with it, hoping that they will be needed in the near future.

Correct Answer: $B$
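As a rough sketch of why option B is right (the block size, word size, and base address below are assumptions for illustration, not values from the question): a miss on one word loads its whole block, so the neighboring words come along for free.

```python
# Sketch: which words land in the cache together.
# BLOCK_SIZE, WORD_SIZE and the base address are assumed values.
BLOCK_SIZE = 64   # bytes per cache block (assumption)
WORD_SIZE = 4     # bytes per word (assumption)

def block_number(byte_addr):
    """Cache block that a byte address falls into."""
    return byte_addr // BLOCK_SIZE

base = 0x1000  # assumed byte address of a[0]

# A miss on a[0] loads the whole block, so these words are cached too.
same_block = [i for i in range(32)
              if block_number(base + i * WORD_SIZE) == block_number(base)]
print(same_block)  # words 0..15 share a[0]'s block
```

With 16 words per block, accessing `a[0]` brings `a[1]..a[15]` into the cache as well, which is exactly the spatial-locality bet.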
by Veteran (420k points)
+9
We go to school to study which helps us in getting a job. Now if someone asks "why do you go to school?" should we say "to study" or "to get a job"?
+1
awesome (Y) example
0
Very nice analogy..:)
0

Can we say it like this:

Putting more than one word in one cache block will reduce the miss ratio, but on the other hand it will also increase the miss penalty.

Right?

0
Yes Shubhanshu, increasing the cache size increases the miss penalty but reduces the miss ratio.
+1
Thanks Anu!!

Increasing the cache size surely decreases the miss ratio, but whether it affects the miss penalty is not certain in the case of simultaneous access, because that depends on the size of the next-level memory. If that memory is large, the search time will be longer, and hence the miss penalty will be higher.
0
@dhruv it will reduce the miss rate. The miss penalty depends on other factors, like block size.
0
The purpose of the block offset in cache mapping is to ensure spatial locality.
0
Suppose we knew which words (assume these are at random locations) to put into the cache; then we could reduce the miss penalty. In this case we use neither spatial nor temporal locality. Is this right?
+1
Why not temporal locality?

We could exploit the temporal locality of reference in a program if the cache manager were intelligent enough to predict that, from time to time, a particular conditional jump results in fetching jump-target instructions stored far from the sequential stream, and to cache them accordingly.

Consider that I am preparing food. I know that every 20 minutes I need a sip of Pepsi, and the Pepsi can is stored in the fridge, far away from the cooking ingredients. So I keep it near me on the kitchen desk. (Ignore that the Pepsi will lose its chill ;p)
When a word is accessed, we take a whole block of words and put it in the cache, expecting the other words in that block to be accessed in the near future — that is spatial locality.
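This can be made concrete with a toy miss counter (my own illustration, not from the answer above; the word counts and block sizes are assumptions). A program that scans an array once gets zero hits with one word per block, but with multi-word blocks the accesses after each miss hit the block that the miss just loaded.

```python
# Toy sketch: hits when a program scans n_words sequentially, once.
# No eviction is modeled; sizes are assumed for illustration.
def hits_for_scan(n_words, words_per_block):
    cached_blocks = set()
    hits = 0
    for w in range(n_words):
        b = w // words_per_block     # block this word belongs to
        if b in cached_blocks:
            hits += 1                # spatial locality pays off
        else:
            cached_blocks.add(b)     # miss: load the whole block
    return hits

print(hits_for_scan(64, 1))    # 0 hits: every access misses
print(hits_for_scan(64, 16))   # 60 hits: 4 misses load 16 words each
```

With one word per block there is nothing for spatial locality to exploit; with 16 words per block, 4 misses serve all 64 accesses.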
by Veteran (420k points)
0
Sir, isn't what you said above temporal locality? Temporal locality relates to accesses that are close in time. I can't understand, please explain.

https://en.wikipedia.org/wiki/Locality_of_reference

According to the above link, it is temporal.
+1

Copied from your link: "Temporal locality refers to the reuse of specific data, and/or resources, within a relatively small time duration. Spatial locality refers to the use of data elements within relatively close storage locations."

Temporal locality says the same data will be accessed again in the near future, while spatial locality says nearby data will be accessed.

0
So what will be the answer if we put more than one word in a cache block?
0
We access one word, so putting more than one word in a block means bringing in neighboring data.
0
So how is it exploiting spatial locality? I am not understanding.
+1
Exploitation means "making use of". So it makes use of spatial locality, i.e., the next reference is near the accessed element in terms of space.
0
Okay.
0
The option says "spatial locality of reference **in a program**", not **in a block**. And I guess there is no such thing as spatial locality within a block, since the block is the target, not the source.
–1 vote

The question seems wrong. Let me explain.

First two points are:

"temporal locality of reference in a program"

"spatial locality of reference in a program"

Notice that they say "in a program". Now I guess both are true. For sequential instructions in a program, spatial locality of reference will be exploited. It is somewhat less straightforward to imagine how temporal locality of reference can be exploited. The Wikipedia definition of temporal locality:

If at one point a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future.

It's incorrect to think that no program can exhibit the above behavior. (For example, for non-sequential instructions, say conditional jump instructions, temporal locality of reference will be exploited.) And it's incorrect to say that the cache manager will not be able to predict such behavior and cache the more probable instructions outside sequential access (say, the jump-target instruction). So temporal locality can very well be exploited.

In fact, I feel any kind of locality of reference can be exploited; the cache manager just needs enough intelligence about what to cache and what not to. So saying that only one type of locality of reference is exploited seems incorrect, since it denies the possibility of exploiting the other types.

Option C is plainly wrong: the miss ratio is reduced, not the miss penalty. But are we supposed to draw that distinction between miss ratio and miss penalty?

I guess I will leave this question and not attempt it.

I would like someone to prove me wrong.

by Active (2.4k points)
0
Understand the question: the line "More than one word are put in one cache block" is enough to say it is spatial locality. Instructions are stored in sequential order, and the purpose of the block offset in cache mapping is to ensure spatial locality.

I guess the answer is D.

We stuff words into blocks because a block is the unit of data access.
This in itself has no relation to temporal/spatial locality or miss penalty.

It is an issue of memory being organized that way.

If the question were about the transfer of data from memory to cache (as in why data transfer takes place in blocks, not words),
then it should be B.

https://www.cs.umd.edu/class/fall2001/cmsc411/proj01/cache/matrix.html
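The linked page demonstrates the same point with matrix traversal order. As a rough sketch (the matrix size, block size, and cache capacity below are my own assumptions, not values from the link): scanning a row-major matrix row by row exploits spatial locality, while scanning it column by column defeats it once the cache cannot hold all the blocks a column touches.

```python
from collections import OrderedDict

# Toy LRU cache of whole blocks; all sizes are assumed for illustration.
def misses(addresses, words_per_block, capacity_blocks):
    cache = OrderedDict()              # block number -> present, LRU order
    count = 0
    for a in addresses:
        b = a // words_per_block
        if b in cache:
            cache.move_to_end(b)       # hit: mark most recently used
        else:
            count += 1                 # miss: load the block
            cache[b] = True
            if len(cache) > capacity_blocks:
                cache.popitem(last=False)  # evict least recently used
    return count

N = 16  # assumed N x N matrix, stored row-major as word addresses
row_order = [i * N + j for i in range(N) for j in range(N)]
col_order = [i * N + j for j in range(N) for i in range(N)]
print(misses(row_order, 4, 8))   # 64: one miss per block, rest are hits
print(misses(col_order, 4, 8))   # 256: every access misses
```

Row order misses only once per 4-word block; column order strides through 16 distinct blocks per column, so an 8-block cache evicts each block before it is reused.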

by Active (1.2k points)
