4,617 views
3 votes

Overlaying

  1. requires use of a loader
  2. allows larger programs, but requires more effort
  3. is most used on large computers
  4. is transparent to the user

5 Answers

10 votes

Whenever a process is running, it does not use the complete program at the same time. So the overlays concept says: whatever part is required now, load only that part. Once that part is done, unload it, load the next part that is required, and run it.

We use this concept to effectively increase the size of the partition; through it, we can run a large program. The virtual memory concept is similar, but the difference is that overlays have to be done by the user (we have to write the program for overlays ourselves; no support from the OS is provided), whereas virtual memory is taken care of by the operating system. Since overlays must be handled by the application programmer, they require more effort. This is what I know; correct me if I am going wrong.

So, I think the answer should be (B).

1 vote

The answer is (D).

Overlaying means "the process of transferring a block of program code or other data into internal memory, replacing what is already stored".[1] Overlaying is a programming method that allows programs to be larger than the computer's main memory.[2] An embedded system would normally use overlays because of the limitation of physical memory (which is internal memory for a system-on-chip) and the lack of virtual memory facilities.

Constructing an overlay program involves manually dividing the program into self-contained object-code blocks called overlays, laid out in a tree structure. Sibling segments, those at the same depth level, share the same memory, called the overlay region or destination region. An overlay manager, either part of the operating system or part of the overlay program, loads the required overlay from external memory into its destination region when it is needed. Linkers often provide support for overlays.
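The "sibling segments share the same region" point is the key memory saving, and can be sketched numerically. This is a minimal, hypothetical example (node names and sizes invented): the shared region only needs to be as large as the biggest sibling, not the sum of all siblings.

```python
# Hypothetical overlay tree: the root is always resident, and its two
# children A and B are siblings that share one destination region.

tree = {
    "root": {"size": 10, "children": ["A", "B"]},  # always in memory
    "A":    {"size": 30, "children": []},          # never resident at
    "B":    {"size": 25, "children": []},          # the same time as A
}

def region_size(parent):
    """A shared region must fit only the largest sibling."""
    kids = tree[parent]["children"]
    return max(tree[k]["size"] for k in kids) if kids else 0

def peak_memory(node="root"):
    """Peak usage = the resident node plus one region for its children."""
    return tree[node]["size"] + region_size(node)

print(peak_memory())  # 10 + max(30, 25) = 40, not 10 + 30 + 25 = 65
```

An overlay manager would load A or B into that one region on demand; without overlays, all 65 units would have to fit in memory at once.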

1 vote

Overlaying is an obsolete Memory Management Technique that helps the machine run processes that exceed its physical memory size.

Overlaying is mostly replaced by Paging these days.


Option A is incorrect, as the loader's role is to load processes into main memory; overlaying does not specifically require one.

Option B is correct as it allows larger programs, but it requires the programmer to divide the code into independent segments.

Option C is incorrect because "large computers" probably have large physical memories.

Option D is correct, as the process of overlaying isn't explicitly visible to the end user (that's the definition of transparency in computer science; funny, eh?).


Options B and D are both correct. But one can debate the "effort" aspect of Option B: maybe a programmer does it effortlessly? Option D, however, is a fact and hence more appropriate.

0 votes
Where is the question?
