GATE1998-2.16


The overlay tree for a program is as shown below:

What will be the size of the partition (in physical memory) required to load (and run) this program?

1. $\text{12 KB}$
2. $\text{14 KB}$
3. $\text{10 KB}$
4. $\text{8 KB}$


"To enable a process to be larger than the amount of memory allocated to it, we can use overlays. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into the space previously occupied by instructions that are no longer needed."

For this program, the memory demand peaks while executing the code at a leaf, since a leaf needs all of its ancestors resident along with itself. So the required partition size is the heaviest root-to-leaf path:

Max requirement $= \max(D, E, F, G) = \max(12, 14, 10, 14) = 14 \text{ KB}$ (Answer)
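The leaf requirements above are just root-to-leaf path sums. A minimal sketch of that computation; since the figure is not reproduced here, the node sizes are assumed from the worked answer (Root = 2 KB, A = 4, B = 6, C = 8, D = 6, E = 8, F = 2, G = 4):

```python
# Overlay tree assumed from the worked answer (figure not shown in this post).
tree = {
    "Root": ("A", "B", "C"),
    "A": ("D", "E"),
    "B": ("F",),
    "C": ("G",),
}
size_kb = {"Root": 2, "A": 4, "B": 6, "C": 8, "D": 6, "E": 8, "F": 2, "G": 4}

def max_path(node):
    """Largest total size along any root-to-leaf path starting at `node`."""
    children = tree.get(node, ())
    if not children:                 # leaf: just its own size
        return size_kb[node]
    return size_kb[node] + max(max_path(c) for c in children)

print(max_path("Root"))  # -> 14
```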

Why can't it be the root plus any one node? Suppose we need the root for the basic data structures and one node at a time.
Doing a depth-first traversal of the tree, we find that Root->A->E and Root->C->G carry the maximum weights: $(2+4+8=14)$ and $(2+8+4=14)$ respectively. It cannot be just the root plus one node, because an overlay can run only when all of its ancestors are also loaded, so the whole path must be resident at once.

Ans: B
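The two heaviest paths named above can be checked by enumerating every root-to-leaf path and its total. Node sizes are assumed from the worked answer, as the figure is not reproduced here:

```python
# Assumed overlay tree (sizes in KB, reconstructed from the answer's path sums).
tree = {"Root": ("A", "B", "C"), "A": ("D", "E"), "B": ("F",), "C": ("G",)}
size_kb = {"Root": 2, "A": 4, "B": 6, "C": 8, "D": 6, "E": 8, "F": 2, "G": 4}

def paths(node, prefix=()):
    """Yield every root-to-leaf path as a tuple of node names."""
    prefix = prefix + (node,)
    children = tree.get(node, ())
    if not children:
        yield prefix
    for c in children:
        yield from paths(c, prefix)

for p in paths("Root"):
    print("->".join(p), "=", sum(size_kb[n] for n in p), "KB")
```

This prints 12 KB for Root->A->D, 14 for Root->A->E, 10 for Root->B->F, and 14 for Root->C->G, matching the four leaf requirements in the accepted answer.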

Hi @Pronomita Dey 1 ji,

Traverse the tree depth first, left to right. When we visit a node for the first time, it is loaded into main memory, and once its subtree is finished, it is pulled out. The maximum footprint occurs just after D is pulled out and E is loaded: Root + A + E = 2 + 4 + 8 = 14 KB.
When the program to be executed is bigger than main memory, we can divide it into modules and execute those modules one at a time (module sizes should be such that a module plus the overlay driver fits in main memory). When one module finishes, the overlay driver loads the next.

In this question, running the program needs a 14 KB partition, the size of its largest module (i.e., the heaviest overlay path).

The difference between overlays and virtual memory: virtual memory is managed by the OS (the user does not need to write any software such as an overlay driver), whereas with overlays the user writes the overlay driver so that a process larger than main memory can run.
