
Recent questions tagged cache-memory

1 vote
2 answers
2
In a particular system it is observed that the cache performance improves as a result of increasing the block size of the cache. The primary reason for this is: Programs exhibit temporal locality; Programs have a small working set; Read operations are required more frequently than write operations; Programs exhibit spatial locality
asked Mar 31 in Operating System Lakshman Patel RJIT 158 views
6 votes
4 answers
6
A computer system with a word length of $32$ bits has a $16$ MB byte-addressable main memory and a $64$ KB, $4$-way set associative cache memory with a block size of $256$ bytes. Consider the following four physical addresses represented in hexadecimal notation. $A1= 0x42C8A4$ ... same cache set. $A3$ and $A4$ are mapped to the same cache set. $A1$ and $A3$ are mapped to the same cache set.
asked Feb 12 in CO and Architecture Arjun 2.6k views
4 votes
2 answers
7
How many total bits are required for a direct-mapped cache with $128$ KB of data and a one-word block size, assuming a $32$-bit address and a word size of $4$ bytes? $2$ Mbits $1.7$ Mbits $2.5$ Mbits $1.5$ Mbits
asked Jan 13 in CO and Architecture Satbir 983 views
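The total storage for the direct-mapped cache above can be worked out by summing data, tag, and status bits per line. A minimal sketch, assuming one valid bit per line and no dirty bit (an assumption of mine, not stated in the question):

```python
# Sketch: total storage bits for a direct-mapped cache.
# Assumes 1 valid bit per line and no dirty bit (not stated in the question).
def direct_mapped_total_bits(data_bytes, block_bytes, addr_bits):
    lines = data_bytes // block_bytes              # number of cache lines
    offset_bits = (block_bytes - 1).bit_length()   # byte offset within a block
    index_bits = (lines - 1).bit_length()          # selects the line
    tag_bits = addr_bits - index_bits - offset_bits
    bits_per_line = block_bytes * 8 + tag_bits + 1  # data + tag + valid
    return lines * bits_per_line

total = direct_mapped_total_bits(128 * 1024, 4, 32)
print(total, total / 2**20)  # 1572864 bits = 1.5 Mbits
```

With $2^{15}$ lines of $32 + 15 + 1 = 48$ bits each, this gives $1.5$ Mbits, matching the last option.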
1 vote
3 answers
8
Which of the following is an efficient method of cache updating? Snoopy writes; Write through; Write within; Buffered write
asked Jan 13 in CO and Architecture Satbir 889 views
0 votes
0 answers
9
The performance of a file system depends upon the cache hit rate (fraction of blocks found in the cache). If it takes $1\: msec$ to satisfy a request from the cache, but $40\: msec$ to satisfy a request if a disk read is needed, give a formula for the mean time required to satisfy a request if the hit rate is $h.$ Plot this function for values of $h$ varying from $0$ to $1.0.$
asked Oct 27, 2019 in Operating System Lakshman Patel RJIT 88 views
0 votes
2 answers
11
To use cache memory, main memory is divided into cache lines, typically $32$ or $64$ bytes long. An entire cache line is cached at once. What is the advantage of caching an entire line instead of a single byte or word at a time?
asked Oct 21, 2019 in Operating System Lakshman Patel RJIT 112 views
1 vote
0 answers
12
Given the following information: TLB hit rate is 95%, TLB access time is 1 cycle; cache hit rate is 90%, cache access time is 1 cycle; when both the TLB and the cache miss, the page fault rate is 1%. The TLB access and the cache access are sequential. ... 75 cycles. Access to the hard drive requires 50,000 cycles. Compute the average memory access latency (in cycles) when the cache is physically addressed.
asked Mar 10, 2019 in CO and Architecture s_dr_13 319 views
0 votes
1 answer
13
Memory is word-addressable with 16-bit addresses. Word size = 16 bits. Each block is of size 16 bits (one word). The cache contains 8 blocks. What is the address division for: (1) direct-mapped, (2) fully associative, (3) set-associative cache?
asked Feb 17, 2019 in CO and Architecture DIYA BASU 200 views
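The address split for the question above follows directly from the parameters: with one word per block there are no offset bits within a block. A sketch, noting that the question does not state the associativity for the set-associative case, so the 2-way split below is my illustrative assumption:

```python
# Tag/index/offset split for a word-addressable cache.
# Here: 16-bit addresses, 1 word per block (0 offset bits), 8 blocks.
def split(addr_bits, blocks, words_per_block, ways):
    offset = (words_per_block - 1).bit_length()  # word offset within a block
    sets = blocks // ways                        # number of sets
    index = (sets - 1).bit_length()              # selects the set
    tag = addr_bits - index - offset
    return tag, index, offset

print(split(16, 8, 1, 1))  # direct-mapped: (13, 3, 0)
print(split(16, 8, 1, 8))  # fully associative: (16, 0, 0)
print(split(16, 8, 1, 2))  # 2-way set associative (assumed): (14, 2, 0)
```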
12 votes
7 answers
14
A certain processor uses a fully associative cache of size $16$ kB. The cache block size is $16$ bytes. Assume that the main memory is byte addressable and uses a $32$-bit address. How many bits are required for the Tag and the Index fields respectively in the addresses generated by the processor? $24$ bits and $0$ bits $28$ bits and $4$ bits $24$ bits and $4$ bits $28$ bits and $0$ bits
asked Feb 7, 2019 in CO and Architecture Arjun 7.7k views
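For the fully associative cache above there are no sets, so the index field is 0 bits and the tag covers everything above the block offset. A short sketch of that arithmetic:

```python
# Fully associative cache: one set, so 0 index bits; the tag is the whole
# address above the block offset.
addr_bits = 32
block_bytes = 16
offset_bits = (block_bytes - 1).bit_length()      # 4 (byte offset in a block)
index_bits = 0                                    # fully associative: one set
tag_bits = addr_bits - index_bits - offset_bits   # 28
print(tag_bits, index_bits)  # 28 0
```

This matches the option "$28$ bits and $0$ bits"; note the $16$ kB cache size only determines how many lines exist, not the address split.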
25 votes
9 answers
15
A certain processor deploys a single-level cache. The cache block size is $8$ words and the word size is $4$ bytes. The memory system uses a $60$-MHz clock. To service a cache miss, the memory controller first takes $1$ cycle to accept the starting ... bandwidth for the memory system when the program running on the processor issues a series of read operations is ______$\times 10^6$ bytes/sec
asked Feb 7, 2019 in CO and Architecture Arjun 6.5k views
2 votes
0 answers
16
A hypothetical processor, on a cache read miss, requires one clock cycle to send an address to main memory (MM) and eight clock cycles to access a 64-bit word from MM to the processor cache. The read miss rate decreases from 14.8% to 2.6% when the cache line size ... words. The speedup of the processor achieved in dealing with the average read miss after increasing the line size is _____ (up to 2 decimal places)
asked Feb 1, 2019 in CO and Architecture newdreamz a1-z0 214 views
0 votes
1 answer
17
For a 4-way set-associative cache, 10 bits are required as the index to specify a cache block. The main memory is of size 4G x 32. What is the size of the cache memory? The given answer is 4096*49. Please explain the notation also.
asked Feb 1, 2019 in CO and Architecture Aman Janko 191 views
0 votes
1 answer
18
Consider a hypothetical processor that supports both two-address and one-address instructions. It has a 128-word memory, and a 16-bit instruction is placed in one memory word. Q1. What range of two-address and one-address instructions can be supported? A) 1 to 3 and 128 ... address instructions can be supported? A) 128 B) 2 C) 256 D) 32. Please give a detailed solution, especially for part 1.
asked Feb 1, 2019 in CO and Architecture learner_geek 945 views
3 votes
1 answer
19
Consider an n-way set-associative cache with x blocks of 64 words each. The main memory of the system has 8 million words. The size of the tag field is 16 bits and the additional memory required for tags is 1024 bytes. What are the values of n and x respectively? Answer: 256 and 512
asked Jan 30, 2019 in CO and Architecture Ram Swaroop 435 views
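The stated answer for n and x above can be back-solved from the tag-store size. A sketch, assuming the tag RAM holds exactly one 16-bit tag per block and that the 8M-word memory is word-addressable (both my reading of the question):

```python
# Back-solve n (ways) and x (blocks) from the tag-store size.
# Assumes one 16-bit tag per block and word-based 23-bit addresses (8M words).
tag_store_bits = 1024 * 8                  # 1024 bytes of tag memory
tag_bits = 16
x = tag_store_bits // tag_bits             # blocks: 8192 / 16 = 512
addr_bits = 23                             # 8M words = 2**23
offset_bits = 6                            # 64 words per block
index_bits = addr_bits - offset_bits - tag_bits  # 23 - 6 - 16 = 1
sets = 2 ** index_bits                     # 2 sets
n = x // sets                              # ways: 512 / 2 = 256
print(n, x)  # 256 512
```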
2 votes
1 answer
20
A CPU cache is organized into a two-level cache, L1 and L2. The numbers of L1 cache misses and L2 cache misses are 60 and 30 respectively for 1200 memory references. The hit times of L1 and L2 are 5 and 10 clock cycles, and the penalty for an L2 cache miss to main memory is 70 clock cycles. What is the average memory access time?
asked Jan 29, 2019 in CO and Architecture Ram Swaroop 427 views
...