Can anyone explain how the SJF approximation algorithm works?

I can see that it predicts the next CPU burst. But how exactly is it selecting a process with this predicted next CPU burst?

Preemption is not possible with SJF approximation, is it?

Here's a screenshot from Galvin. Methods like the exponential average and the simple average help predict the next CPU burst based on the past history of CPU bursts; you can check Galvin for the details.
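As a minimal sketch of the exponential-average prediction Galvin describes (the function name and default weight here are my own choices, not from the book): the next prediction tau_{n+1} is a weighted blend of the most recent measured burst t_n and the previous prediction tau_n.

```python
def predict_next_burst(tau, t, alpha=0.5):
    # Exponential average as in Galvin:
    #   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    # tau:   the previous prediction
    # t:     the most recently measured CPU burst
    # alpha: how heavily the newest measurement is weighted (0..1)
    return alpha * t + (1 - alpha) * tau

# e.g. previous prediction 10, measured burst 6, alpha = 0.5
predict_next_burst(10, 6)  # -> 8.0
```

With alpha close to 1 the predictor tracks recent behavior quickly; with alpha close to 0 it remembers old history for a long time.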

SJF can be preemptive or non-preemptive. Preemptive SJF is generally called "shortest remaining time first" (SRTF).
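To illustrate the selection step (this data layout is hypothetical, just for illustration): the scheduler picks the ready process with the smallest predicted remaining time, and under SRTF a newly arrived process with a shorter prediction would preempt the running one.

```python
def pick_next(ready):
    # ready: list of (pid, predicted_remaining_time) pairs.
    # SJF/SRTF selection: choose the smallest predicted remaining time.
    return min(ready, key=lambda p: p[1])

pick_next([("A", 7), ("B", 3), ("C", 5)])  # -> ("B", 3)
```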

Below is an excerpt from Tanenbaum. It cleared up all my doubts about SJF approximation. I was confused because of the assumption that each process has only one CPU burst. So we predict the next CPU burst of the running process, not of some different process in the waiting queue. Preemption is possible in SJF approximation too.

Shortest Process Next

Because shortest job first always produces the minimum average response time for batch systems, it would be nice if it could be used for interactive processes as well. To a certain extent, it can be. Interactive processes generally follow the pattern of wait for command, execute command, wait for command, execute command, etc. If we regard the execution of each command as a separate "job," then we can minimize overall response time by running the shortest one first. The problem is figuring out which of the currently runnable processes is the shortest one.

One approach is to make estimates based on past behavior and run the process with the shortest estimated running time. Suppose that the estimated time per command for some process is T0. Now suppose its next run is measured to be T1. We could update our estimate by taking a weighted sum of these two numbers, that is, a*T0 + (1 − a)*T1. Through the choice of a we can decide to have the estimation process forget old runs quickly, or remember them for a long time. With a = 1/2, we get successive estimates of

T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, T0/8 + T1/8 + T2/4 + T3/2

After three new runs, the weight of T0 in the new estimate has dropped to 1/8.

The technique of estimating the next value in a series by taking the weighted average of the current measured value and the previous estimate is sometimes called aging. It is applicable to many situations where a prediction must be made based on previous values. Aging is especially easy to implement when a = 1/2. All that is needed is to add the new value to the current estimate and divide the sum by 2 (by shifting it right 1 bit).
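The aging trick Tanenbaum describes for a = 1/2 can be sketched in a few lines (the starting value and measured runs below are made-up numbers for illustration):

```python
def age(estimate, measured):
    # Aging with a = 1/2: add the new measured run to the current
    # estimate and halve the sum. For integers this is just a
    # one-bit right shift, which is why it is so cheap to implement.
    return (estimate + measured) >> 1

# Starting estimate T0 = 8, then three measured runs: 4, 2, 6.
est = 8
for run in [4, 2, 6]:
    est = age(est, run)
    print(est)  # successive estimates: 6, 4, 5
```

After three new runs, T0 contributes only 1/8 of the current estimate, matching the weight sequence given in the excerpt.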