+11 votes
337 views

Consider a computer system with a single-core CPU, a single level of cache of size $4 \text{MB}$, and main memory. It takes one CPU cycle to access a memory byte if it is in the cache, and $145$ cycles if the memory access incurs a cache miss and must be fetched from main memory. The size of a cache line is $64$ bytes. Consider two arrays $A$ and $B$, each of $N=2^{20}$ integers (assume that an integer requires $4$ bytes of storage). The arrays are stored contiguously in memory and are aligned at cache line boundaries. The code below shows an access pattern over the arrays $A$ and $B$. Calculate the average time (in CPU cycles) required to access a single element of array $A$ (averaged over all accesses to $A$). Assume that the cache is empty at the start of every scenario, that no other process is using the cache, and that the cache does not use any optimizations like prefetching.
A direct mapped cache, and every element of $A$ and $B$ is read in sequence as follows:

for (i = 0; i < N; i++)
{
    read A[i];
    read B[i];
}

0
See, I did it like this; please tell me where I went wrong.

Number of lines in cache $=\frac{4\,\text{MB}}{64\,\text{B}}=2^{16}$

Now arrays $A[220]$ and $B[220]$ are accessed. Each int element requires $4$ bytes.

So, one main memory block can store $\frac{64}{4}=16$ elements of int type.

Now array A[] would require $\lceil\frac{220}{16}\rceil=14$ blocks of main memory, and array B[] would require the same.

So, in total 28 blocks would be required, and since the arrays are contiguously allocated and the cache is large enough to hold both arrays at once, no conflict miss will occur.

Now this is the code

for (i = 0; i < N; i++)
{
    read A[i];
    read B[i];
}

So, every block of array A and array B will be mapped onto a different line of the cache, so there are no conflicts.

Total 220 accesses to array A.

1 Miss per 16 element access to Array A.

Therefore 14 misses for array A. So, number of hits $=220-14=206$.

So, average access time for one element of array A $=\frac{(206 \times 1)+(14 \times 145)}{220}\approx 10.16$ cycles.

Please tell me where I went wrong?
0

@Ayush Upadhyaya There has been a slight misprint. It's not $220$, it's $2^{20}$, and the answer is $145$. I have asked @Arjun sir to update it.

+9
A[i] and B[i] will always map to the same cache line, because they are 4 MB apart. So every access to A will be a miss, because the intervening access to B flushes out the previously fetched line of A. So the average access time = 145 cycles.

You don't even need to calculate. Simple observation will do!
0
Yes, now it's very intuitive. Thank you :)
+1

@Ruturaj Mohanty how do we know that A[i] and B[i] will always map to the same cache line?

+2
Because Array A and Array B are 4MB apart.
0

@Ayush Upadhyaya :( how can you be so sure about it? For the two arrays to conflict, the index bits of the corresponding blocks' addresses must be the same, and we don't even know at what memory addresses the arrays are stored.

0

@Ruturaj Mohanty Is it not 146 cycles? Because we will first check the cache, which takes 1 cycle; it will be a miss, so we will then go to main memory, which takes 145 cycles. So total access time = 1 + 145 = 146. Kindly correct me if I am wrong.

0
The 145 cycles already includes that. No need to add an extra 1.
0

@Ayush Upadhyaya Shouldn't it be 146, since we first check in the cache and then retrieve it from main memory?

+1

@satendra - No. The question has clearly provided the total time on a cache miss and on a cache hit. You don't need to think about it further.

0

@Ayush Upadhyaya Had it been given that the 145 cycles is only the penalty to load a block from main memory (on top of the cache lookup) and access the required byte, then it would have been 146. Am I correct?
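The two readings of the given numbers can be made explicit with the standard average-memory-access-time formula. With hit time $h$, miss rate $m$, and miss penalty $p$:

$$T_{avg} = h + m \times p$$

Here every access to $A$ misses, so $m = 1$. If 145 cycles is the *total* time to service a miss (as the question states), then $T_{avg} = 145$ cycles. If 145 cycles were instead only the penalty added on top of the 1-cycle cache lookup, then $T_{avg} = 1 + 145 = 146$ cycles.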

+3

It's a direct-mapped cache and the size of the cache is 4 MB, so if the 1st element of A goes to block 0, then the 1st element of B will also go to block 0, because they are exactly 4 MB apart ($2^{20} \times 4\,\text{B} = 4\,\text{MB}$).
