A device with data transfer rate of 8 KBps is connected to a CPU. Data is transferred bytewise. Let interrupt overhead be 2 μsec. The byte transfer time between the device interface register and CPU or memory is negligible. The minimum performance gain of operating the device under interrupt mode over operating it under program-controlled mode is __________ . (Upto 1 decimal places)


In Programmed I/O, the CPU issues a command and waits for I/O operations to complete.

So here the CPU will be busy for 1 sec to transfer 8 KB of data.

Overhead in programmed I/O = 1 sec

In interrupt mode, data is transferred word by word (here the word size is 1 byte, as the question says
"Data is transferred byte-wise").
So the overhead to transfer 1 byte of data is 2×10^−6 sec.
Thus the overhead to transfer 8 KB of data is 2×10^−6 × 8×10^3 sec = 0.016 sec.

Performance gain = 1 / (2×10^−6 × 8×10^3) = 62.5
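The arithmetic above can be sanity-checked with a short script (a sketch; the variable names are mine, and 8 KB is taken as 8 × 10^3 bytes, as in this answer):

```python
# Programmed I/O: the CPU busy-waits for the full 8 KB transfer at 8 KBps.
t_programmed_us = 1_000_000            # 1 sec of CPU time, in usec

# Interrupt mode: the CPU pays only the 2 usec interrupt overhead per byte.
bytes_transferred = 8 * 10**3          # 8 KB, taking K = 10^3 as in the answer
t_interrupt_us = bytes_transferred * 2 # = 16_000 usec = 0.016 sec

gain = t_programmed_us / t_interrupt_us
print(gain)                            # 62.5
```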

For interrupt I/O the formula is: x/y

For program-controlled I/O: x/(x + y)

Here x = time taken by the CPU for interrupt processing, and y = time taken by the I/O device to transfer the data.

Data transfer rate is 8 KBps,

so 8 × 2^10 B are transferred in 1 second,

and 1 B is transferred in (1/8192) × 10^6 μsec = 0.000122 × 10^6 μsec = 122.07 μsec ≈ 122 μsec.

Hence y = 122 and x = 2.

For interrupt I/O, the processor time consumed is: (2/122) × 100 ≈ 1.63 %

For programmed I/O mode, the processor time consumed is: (2/(2 + 122)) × 100 ≈ 1.61 %

The minimum performance gain of operating the device under interrupt mode over operating it under program-controlled mode is:

(1.63 − 1.61/1.61) = 1.63 − 1 = 0.63

Hence the performance gain is 0.6.
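The intermediate percentages in this answer can be recomputed with a short script (a sketch; the names x and y follow this answer, and the 1.63 %/1.61 % quoted above differ only by rounding, since the answer first rounds y to 122 μsec):

```python
# CPU-time shares used in this answer (x, y as defined above, in usec).
x = 2.0                        # CPU time per interrupt
y = (1 / 8192) * 10**6         # time to transfer 1 byte at 8 * 2^10 Bps, ~122.07

interrupt_pct  = x / y * 100         # interrupt I/O: x / y
programmed_pct = x / (x + y) * 100   # programmed I/O: x / (x + y)
print(round(interrupt_pct, 2), round(programmed_pct, 2))   # 1.64 1.61
```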

Same type of question :

https://gateoverflow.in/114797/interrupt-i-o


In the case of programmed I/O:

The problem is that, because it is the CPU that initiates the transfer, the CPU may not be aware of the moment the device becomes ready, and hence it may not scan that particular device, i.e. check the status of the device and learn that it is ready. So there is a possibility that the data could be lost.

In this case we need to count this search time as an overhead: the time the CPU spends checking each device to see whether it is ready for data transfer.

But in interrupt I/O mode, when the device is ready it indicates this to the CPU by generating what is known as an interrupt signal. So no time is wasted polling, and hence we should not count that overhead.

And yes, whether it is programmed I/O or interrupt I/O, the final transfer to memory goes through the CPU. In the case of DMA, the direct memory access happens without the CPU handling each byte.

https://gateoverflow.in/1392/gate2005-69

It is also the same type of question, @Bikram sir.

Here, in programmed I/O, we never consider the interrupt overhead that is given in the question. Check.

I think this is an incorrect solution, because a similar problem was asked in GATE: https://gateoverflow.in/1392/gate2005-69

Sir, I think you have dragged this easy question in the wrong direction.

For programmed I/O, the time required to send 1 B of data directly is 10^−3/8 sec; this is also the CPU time, because the CPU is idle, doing nothing else.

In interrupt-driven I/O, while 1 B of data is being transferred the CPU is busy doing other tasks; once it gets the interrupt signal, handling it takes 2×10^−6 sec, and the data transfer from the buffer to main memory is negligible, so the total time is 2×10^−6 sec only.

So the performance gain (we know performance gain is expressed as a speed-up, i.e. the time taken by the old scheme divided by that of the new one) is the time taken in programmed mode divided by the time taken in interrupt mode,

that is: (10^−3/8) / (2×10^−6) = 1000/16 = 62.5

In programmed I/O, the CPU does continuous polling:

8 × 10^3 bytes …………….. 1 sec

1 byte …………………….. 1/(8 × 10^3) sec = 125 μsec

In interrupt mode the CPU is interrupted on completion of the I/O.
To transfer 1 B the CPU does 2 μsec of processing (since the transfer time between the other components is negligible).
Gain = 125/2 = 62.5
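The per-byte comparison used here can be sketched the same way (variable names are mine):

```python
# Per-byte CPU cost in each mode, in usec.
t_poll_us = 10**6 / (8 * 10**3)   # polling wait per byte at 8 KBps = 125.0 usec
t_intr_us = 2                     # interrupt processing per byte

gain = t_poll_us / t_intr_us
print(t_poll_us, gain)            # 125.0 62.5
```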
In program-controlled mode the CPU continuously polls the device, so per byte 1 B / 8 KBps = 125 μsec is wasted waiting for the byte to become ready.

In interrupt mode it is 2 μsec, so the performance increase is 125/2 = 62.5.

But how did you come to know that the CPU polls the device at 8 KBps? Or should the CPU poll at at least 8 KBps?
It polls until the 1 B is ready for the transfer.
I am getting 62.5. What is the answer?
@pavan, you mean that when the device tells the CPU it wants to transfer data, the CPU will keep polling till that 1 byte is ready?
1 sec = 8 KB,

so for 1 byte: 1/8K sec = 0.125 ms = 125 μsec

and interrupt I/O time = transfer time + interrupt overhead

interrupt overhead = 2 μsec

transfer time is given as negligible → 0

interrupt I/O time = 2 μsec for 1 byte

performance gain = 125/2 = 62.5