
Which one of the following statements is $\text{FALSE}?$

1. The $\text{TLB}$ performs an associative search in parallel on all its valid entries using page number of incoming virtual address.
2. If the virtual address of a word given by $\text{CPU}$ has a $\text{TLB}$ hit, but the subsequent search for the word results in a cache miss, then the word will always be present in the main memory.
3. The memory access time using a given inverted page table is always same for all incoming virtual addresses.
4. In a system that uses hashed page tables, if two distinct virtual addresses $\text{V1}$ and $\text{V2}$ map to the same value while hashing, then the memory access time of these addresses will not be the same.

$\underline{\textbf{Answer: C }}$

$\underline{\textbf{Explanation: }}$

$\textbf{Option (A):}$

The TLB is like a fully associative cache. We know that in a fully associative cache, the search is made on the basis of $\color {red}{\textbf{Content}}$ rather than the $\color {red}{\textbf{Address}}$: all the tags are matched in parallel, at once. Therefore, Option (A) is TRUE.

$\textbf{Option (B):}$ This option is TRUE as well.

$\underline{\textbf{Control Flow:}}$

1. First, we go to Cache Memory, and if there is a cache hit, then no problem.
2. In case there is a cache miss, we go to the next step.
3. In this step we go to the TLB. If there is a TLB hit, we use the resulting physical address to access physical memory.
4. In case there is a TLB Miss, then we will access the “page table”, in order to obtain the frame number.
5. If the page is not found, then it means that there is a “page fault”. At this juncture, we will use one of the page replacement algorithms, in order to obtain the page from secondary memory to physical memory.
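The control flow above can be sketched as a short Python function. All the structures (cache, TLB, page table, memory contents) are illustrative dicts invented for this sketch, and the ordering (cache checked before the TLB) follows the list above, which implies a virtually-addressed cache.

```python
# Minimal sketch of the control flow above (hypothetical values throughout).
cache = {0x1A: "word@0x1A"}          # virtual address -> word
tlb = {3: 7}                          # page number -> frame number
page_table = {3: 7, 4: 9}             # page number -> frame number
main_memory = {7: "frame-7 data", 9: "frame-9 data"}

PAGE_SIZE = 16

def access(vaddr):
    # Steps 1-2: try the cache first
    if vaddr in cache:
        return cache[vaddr]                      # cache hit
    page, offset = divmod(vaddr, PAGE_SIZE)
    # Step 3: TLB lookup
    if page in tlb:
        frame = tlb[page]                        # TLB hit
    # Step 4: TLB miss -> walk the page table for the frame number
    elif page in page_table:
        frame = page_table[page]
        tlb[page] = frame                        # refill the TLB
    else:
        # Step 5: page fault -- a page replacement algorithm would bring
        # the page in from secondary memory here (not modeled)
        raise KeyError("page fault")
    return main_memory[frame]

print(access(0x1A))                 # cache hit
print(access(3 * PAGE_SIZE + 2))    # cache miss, TLB hit, served from memory
```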

$\textbf{Option (C):}$
An inverted page table is essentially a hash map. So, by the same logic as Option (D), the lookup time varies across virtual addresses, and Option (C) is FALSE.
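A hedged sketch of why inverted-page-table access time varies: the table is indexed by frame number, so translating a virtual address means scanning the frames for a matching (process id, page number) pair, and the cost depends on where the match sits. The table contents below are made up for illustration.

```python
# inverted_table[frame] = (process_id, page_number) -- hypothetical entries
inverted_table = [(1, 5), (2, 5), (1, 9), (3, 0)]

def translate(pid, page):
    """Return (frame, frames_examined); the probe count varies per address."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame, frame + 1
    raise KeyError("page fault")

print(translate(1, 5))  # match in frame 0: 1 probe
print(translate(3, 0))  # match in frame 3: 4 probes -> longer access time
```

Note that frame 1 holds the same page number 5 but for a different process, so a match requires comparing the whole pair, not just the page number.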

$\textbf{Option (D):}$
This is TRUE: when two values map to the same address, the values are chained in the form of a linked list. In the best case, for the first element of the linked list, the lookup time is $\mathbf{O(1)}$; for the last element of the chain it is $\mathbf{O(n)}$. So the memory access times differ.
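A small sketch of the chaining described above: two page numbers that hash to the same bucket end up in one chain, so their lookups probe different numbers of nodes. The bucket count and entries are invented for illustration.

```python
NUM_BUCKETS = 4
buckets = [[] for _ in range(NUM_BUCKETS)]   # each bucket is a chained list

def insert(page, frame):
    buckets[page % NUM_BUCKETS].append((page, frame))

def lookup(page):
    """Return (frame, probes): head of chain -> 1 probe (O(1));
    last node -> chain length probes (O(n))."""
    chain = buckets[page % NUM_BUCKETS]
    for probes, (p, frame) in enumerate(chain, start=1):
        if p == page:
            return frame, probes
    raise KeyError("page fault")

insert(3, 10)     # V1's page 3
insert(7, 11)     # V2's page 7 -- 7 % 4 == 3, so it collides with page 3
print(lookup(3))  # (10, 1): found at the head of the chain
print(lookup(7))  # (11, 2): second node, so a longer access time
```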

$\textbf{Good Links:}$

https://stackoverflow.com/questions/37825859/cache-miss-a-tlb-miss-and-page-fault

https://cs.stackexchange.com/questions/66698/difference-between-page-table-and-inverted-page-table


### 1 comment

This video will be helpful for option D.

Ans: Option C

1. According to Galvin, “The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned.” Hence option A is TRUE.

1. The TLB contains only a few of the page-table entries. If the virtual address of a word given by the CPU has a TLB hit, it means the page number is found, so its frame number is immediately available and is used to access memory. Therefore, even if the subsequent search for the word results in a cache miss, it does not matter: the word is simply not in cache memory, but it will always be present in the main memory, as we already know the frame number due to the TLB hit. Hence option B is also TRUE.

1. An inverted page table has frame numbers as indexes, unlike a page table where the page number is used as the index. Therefore, for any given virtual address we need to search for the exact $<\text{page no., process no.}>$ pair among the frames of main memory. In the worst case we might have to search all the frames of main memory in order to find it. (This is because we may find the required page number, but it might belong to a different process. So bad luck!) This increases main memory access time, and is the reason why the inverted page table uses less space but takes longer to search. Therefore, the memory access time using a given inverted page table is not always the same for all incoming virtual addresses. Hence option C is FALSE and is the answer.

1. In a system that uses hashed page tables, if two distinct virtual addresses V1 and V2 map to the same value while hashing, they will be present in a linked list of items having the same hashed value. The memory access times of these addresses will then not be the same: in a linked list, the best case is $\Omega(1)$ for the first element, and the worst case is $O(N)$ for the last element of the chain. Hence option D is also TRUE.

Very nice explanation

This video will be helpful for Option C.


Just to add to the above answer, in option $B$, there is a $TLB$ hit but a cache miss.

The $TLB$ gives the frame number, and the resulting physical address is used for searching the cache.

This means that the $TLB$ is placed before the cache, and the cache is $PIPT$ (Physically-Indexed, Physically-Tagged).
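The TLB-before-cache path above can be sketched in a few lines: translate first, then use the physical address for the cache. The page size and TLB contents are invented for this sketch.

```python
# Hedged sketch of the PIPT path: TLB translation happens first, and the
# physical address it produces is what indexes and tags the cache.
PAGE_SIZE = 4096           # 12-bit page offset (illustrative)
tlb = {5: 0x2A}            # page number -> frame number (hypothetical)

def to_physical(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = tlb[page]                   # TLB hit assumed, as in option (B)
    return frame * PAGE_SIZE + offset   # physical address given to the cache

paddr = to_physical(5 * PAGE_SIZE + 0x34)
print(hex(paddr))   # frame 0x2A with offset 0x34 -> 0x2a034
```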

For details, refer https://en.wikipedia.org/wiki/Translation_lookaside_buffer#Overview