Recent posts in Operating System


Here is a link to concise notes for the entire OS course. The notes are prepared from Galvin


tusharb posted in Operating System Mar 28, 2022 edited Mar 28, 2022 by tusharb

You can watch the video below to clear up concepts related to multi-level paging:


Arjun posted in Operating System Jan 27, 2019


Day Date Contents Slides Assignments
 1  Oct 31 Introduction to Operating System
Process, program, How process works?
Components of process 
fork() call- cloning, parent and child process, example program
 2  Nov 9 Process- process states- New(process starts but not ready for execution- 
waiting for resources), Ready, Running, Blocked, Terminated
Running to ready state- the scheduler preempts the running process (preemption)
Running to blocked- I/O wait
Process Control Block- state, ID, address space pointer, open file descriptor
Address space- stack, heap, data, code
Context switch between processes- things to save- PC, register file, virtual memory info
Cache on context switch- invalidate or add process ID in the cache
Is context switch expensive? - Yes
Memory is roughly 100 times slower than the CPU, but pipelined execution hides part of this latency
Parallelism inside CPU- given a sequence of instruction execution time 
can't be predicted
context switch- transparent to the user
Threads- lightweight processes (less switching time)
-Kernel level and user level threads
Assignment 2
Assignment 3
 3  Nov 10 In a single-processor system, is there any advantage to multithreading? Yes, for
I/O-bound jobs; if the process is purely CPU-bound there is no advantage
Is physical memory divided among operating systems? No; physical memory is RAM, and the OS
resides on disk, which is independent of physical memory.
Single CPU with multiple logical cores- SMT- simultaneous multithreading
Long-term - can run complex scheduling algorithms, takes job from disk to the ready queue
Short-term - order in which job executes in CPU
Medium term- swaps jobs in and out of main memory
Dispatcher- After short- term scheduler decides next process dispatcher takes care
 of context switch
CPU Scheduling- Utilization, throughput, turn-around time, waiting time, response time
 4  Nov 11 Process- fork
child process duplicates the parent's address space
fflush(stdout)- usage
Will fork work without virtual memory?- Relative addressing works, but with absolute addressing
both child and parent processes write to the same locations
How to share data among processes?- Message passing, shared memory-common address
space ensures view serializability
Scheduling algorithms
1. FIFO - easy implementation, convoy effect
2. SRTF- shortest remaining time first- pre-emptive
    SJF- shortest job first- non preemptive
3. Highest response ratio first
4. Round Robin- FCFS, preemptive FIFO
    As the time quantum decreases, response time improves (at the cost of more context switches)
5. Priority based
burst time, advantage of preemption
Scheduler sends interrupt to switch processes
 5  Nov 12 Is CPU utilization better for i)preemptive or ii) non-preemptive algorithms?
Not much difference as it depends on the set of processes
SJF- CPU utilisation can be low, since at some instant all processes may be waiting for I/O
Virtual Memory- it gives 'VIEW' of very large physical memory
Memory management unit
Virtual Memory- runs a program which is larger than physical memory
Virtual address to physical address mapping
If all processes are smaller than main memory, is there any use for virtual memory?
In multi-user system - virtual address space is partitioned
Paging- internal fragmentation
Segmentation- external fragmentation
 6  Nov 13 Indexing analogy to paging
In paging there is no need to store keys at each level, since an array (a direct index) can be used to
store pointers to physical addresses.
First level of page table should be in main memory
For paging- page tables and page frame sizes are different
Paging done by OS for itself
Segmentation- variable size segments with base address and limit register
In paging, can a page-table entry point to a location other than main memory?
Yes it can point to disk or I/O devices
What happens when swap space is 0?- Programs larger than physical
memory can't be run
Cache- in the CPU; the OS doesn't manage it
VM- is it transparent to CPU?- VA generated by CPU
All addresses generated by CPU is logical
How many memory accesses in VM?
one page table lookup(in main memory) and actual memory access
What if there is a TLB?- On a TLB hit only one memory access is needed; on a miss, more than one
VM cannot be faster than physical addressing
TLB is a small cache in CPU that caches only page table entries
 7  Nov 14 Problems on multi-level paging- number of bits required for addressing next level 
page table/page frame in each of the level
Page table will always be in main memory
Inverted page table- indexed using physical address 
No direct mechanism for sharing pages
the same virtual address from different processes is distinguished by process ID
Page replacement
page fault service
optimal page replacement
FIFO- Belady's anomaly- page fault may increase on increasing number of
frames with same page size
LRU- least recently used page is replaced, 
MRU- most recently used page is replaced,
Random page replacement
8 Nov 15 Paging- multilevel paging, examples- number of address bits used in each level, 
size of page table used in paging, number of levels in paging
9 Nov 19 Cache misses- with examples
Process synchronisation
Critical section problems, solution- mutual exclusion, progress, bounded waiting
Peterson's solution
10 Nov 21 Test and set instruction, mutex, semaphore, 
Test and set satisfying bounded waiting for n processes
priority inversion with example
 11 Nov 22 If mutual exclusion is not satisfied, is progress satisfied? Yes
No starvation implies progress
Deadlock implies starvation
Deadlock- prevention and avoidance
If we prevent deadlock, we avoid deadlock
Deadlock can be avoided by other means too.
Four conditions of deadlock
1) mutual exclusion- to break it, make the resource sharable
2)hold and wait, 
3)no preemption, 
4)circular wait
Deadlock implies unsafe state
Unsafe state does not imply deadlock
From safe state process can move to unsafe state if more resources are allocated
Why do we need Banker's algorithm?
A resource-allocation graph detects deadlock via cycles only when each resource type has a single instance
Example of Banker's algorithm
 12 Nov 23 File system
Disk structure- platters, cylinder, track, sector, read/write head
Disk access time-seek time, rotational latency, transfer time
linear velocity, angular velocity
Files-access methods- sequential and direct, mounting, shortcuts- hardlink, symbolic
File structure- tree structure, acyclic graph(shortcuts), directory- single level, multilevel
File allocation methods- contiguous, linked, indexed
Levels of indirection in indexed file
Disk scheduling algorithms- FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK
RAID- levels
Manoja Rajalakshmi A posted in Operating System Nov 23, 2018
CMSC 421 Operating Systems Lecture Notes
(c) 1994 Howard E. Motteler

Interprocess Communication

(Section 2.2 in Tanenbaum)

Interprocess communication is fundamental to

  - client-server applications (e.g. X windows)
  - modern threaded operating system design
  - many systems utilities in *any* OS (e.g. print spoolers)
  - parallel processing

There are two general models of interprocess communication

  - Shared memory with synchronization operations
  - Message passing

Shared Memory with Synchronization Operations

Shared memory must be managed carefully.

Example of 2 processes writing to a shared buffer:

Suppose that both the pointer p and the buffer it points to are
shared by processes P0 and P1

[ show pix of buffer and pointers ]

Suppose that

      -  P0 contains  *p++  =  a ;
      -  P1 contains  *p++  =  b ;

If both P0 and P1 execute these statements once, we expect

      -  p is incremented twice
      -  both  a  and  b  are saved (in some order)

The statement *p++ = a typically is implemented by at least 
two machine instructions

      -  *p = a ;
      -  p++ ;

Consider the following possible sequence of events, where two
processes both copy data to a buffer

P0:                 P1:
     :                                       |
  *p = a ;                                   |
  (context switch)                         time
                         :                   |
                      *p = b ;               |
                       p++   ;               V
                      (context switch)
   p++   ;

The value `a' has been overwritten by `b'

This is sometimes called a "race condition," because the
final result depends on instruction timing

The *last* variable assigned "wins", i.e., gets saved

Mutual Exclusion in Critical Sections

A ``critical section'' is a part of a program that accesses
shared data

A program can have several critical sections for different pieces of shared data

The ``mutual exclusion problem'' is to find a way to guarantee
that only one process at a time can access the shared data

A solution to the mutual exclusion problem should satisfy

  1. `mutual exclusion:' only one process in a critical 
      section at a time

  2. `progress:' a process outside a critical section can't
      hold up others that want into that critical section
  3. `bounded waiting:' a process waiting to enter a
      critical section eventually gets in

  4. `timing:' no assumptions are made about speed, timing, 
      or number of CPUs

Disabling Interrupts

The simplest way to get mutual exclusion is to disable
interrupts when a process enters a critical section

Without clock interrupts, there's no context switch

Problem: the user would need special `enter' and `exit' critical
section commands, and these commands could be used to hog the CPU

Disabling interrupts is sometimes useful for the kernel itself

Two-process software solutions to the Mutual Exclusion Problem

We will try and develop some code to ``guard'' a critical
section, so it can be shared by two processes

All these methods use ``busy waiting'': a waiting process spins in a loop instead of blocking

Simple Locks

Suppose we have a shared `lock' variable, initially 0 and set to
1 whenever a process is in its CS

P0, P1:
   while ( lock != 0 ) ;     // wait until lock is 0
   lock = 1 ;                // set lock
   CS() ;                    // enter the critical section
   lock = 0 ;                // reset lock


    P0:                     P1:
    finds lock = 0,
    (context switch)
                            finds lock = 0,
                            sets lock = 1,
                            enters CS
                            (context switch)
    sets lock = 1,
    enters CS

Mutual exclusion fails just as in our initial example
with the shared buffer


Suppose we allow the processes to take `turns'

while (alive) {
   while ( turn != 0 ) ;     // wait until turn = 0
   CS() ;                    // enter the critical section
   turn = 1 ;                // other process's turn
}

while (alive) {
   while ( turn != 1 ) ;     // wait until turn = 1
   CS() ;                    // enter the critical section
   turn = 0 ;                // other process's turn
}

Mutual exclusion is satisfied, but `progress' fails

Suppose P0 enters and leaves its critical section

Then P0 cannot re-enter the critical section until P1 enters and
leaves its critical section

Flags for `in the CS'

Idea: use two flags, both FALSE initially; flag0 is true
when P0 is in the CS, flag1 is true when P1 is in the CS

while (alive) {
   while ( flag1 ) ;   // wait until flag1 = false
   flag0 = TRUE ;
   CS() ;
   flag0 = FALSE ;
}

while (alive) {
   while ( flag0 ) ;   // wait until flag0 = false
   flag1 = TRUE ;
   CS() ;
   flag1 = FALSE ;
}

Mutual exclusion fails:

    P0 does `while' test, finds flag1 = FALSE
    P1 does `while' test, finds flag0 = FALSE
    P0 sets flag0 = TRUE and enters CS
    P1 sets flag1 = TRUE and enters CS

Flags for `want to enter' (similar to previous example)

while (alive) {
   flag0 = TRUE ;
   while ( flag1 ) ;   // wait until flag1 = false
   CS() ;
   flag0 = FALSE ;
}

while (alive) {
   flag1 = TRUE ;
   while ( flag0 ) ;   // wait until flag0 = false
   CS() ;
   flag1 = FALSE ;
}

In this case `progress' fails

P0 sets flag0 = true
P1 sets flag1 = true

and both are stuck!

Peterson's Solution

Use both a turn and flag variables; both flags are initially
FALSE.   (Flags are for "want to enter")

while (alive) {
   flag0 = TRUE ;
   turn = 1 ;
   while ( flag1 && turn == 1 ) ;   // wait until flag1 = FALSE 
   CS() ;                           //            or turn = 0
   flag0 = FALSE ;
}

while (alive) {
   flag1 = TRUE ;
   turn = 0 ;
   while ( flag0 && turn == 0 ) ;   // wait until flag0 = FALSE
   CS() ;                           //            or turn = 1
   flag1 = FALSE ;
}

Partial analysis

  -Suppose P0 wants CS, and P1 is not in and does not want CS:
   Then P0 enters CS, as flag1 is FALSE (part of progress)

  -Suppose P0 wants CS, and P1 is in CS.
   Then P0 waits for flag1 to go FALSE (mutual exclusion)

  -Suppose both P0 and P1 want CS
   Then the *first* one to set turn is the one to get in
   (mutual exclusion & progress)

   P0  flag0 = TRUE;
   P1  flag1 = TRUE;
   P0  turn  = 1;
   P1  turn  = 0;                   P1 is second to set turn
   P0  while (flag1 && turn == 1)   false, turn is 0, so enter CS
   P1  while (flag0 && turn == 0)   true, loop until P0 leaves CS,
                                    enter CS when flag0 goes FALSE
Must check bounded waiting:

  - Suppose both P0 and P1 want to enter the CS
  - P0 sets turn=1, P1 sets turn=0, P0 enters CS
  - P1 is waiting in while loop for either flag0 to be FALSE,
    or for turn to be 1

  - Suppose P0 exits CS
  - P0 sets flag0 = FALSE;
  - Suppose P1 is unlucky
  - P0 sets flag0 = TRUE before next P1 test
  - P0 sets turn = 1
  - P1 enters CS on turn = 1

Hardware for Critical Sections

The `Test & Set Lock' Instruction

Suppose we have a machine instruction TSL(v)
that `atomically' (i.e., uninterruptibly) 
      - sets v to TRUE
      - returns the old value of v

Then it is very easy to protect a critical section with a
`lock' variable:

entry:   while ( TSL(lock) ) ;   // wait until lock is FALSE
exit:    lock = FALSE;

How it works:

Suppose we hit `entry' and lock is FALSE: TSL atomically sets lock to TRUE and returns FALSE, so the while loop exits and the process enters its critical section
bahirNaik posted in Operating System Feb 1, 2016