In this problem, you are to compare reading a file using a single-threaded file server and a multithreaded server. It takes $12$ msec to get a request for work, dispatch it, and do the rest of the necessary processing, assuming that the data needed are in the block cache. If a disk operation is needed, as is the case one-third of the time, an additional $75$ msec is required, during which time the thread sleeps. How many requests/sec can the server handle if it is single threaded? If it is multithreaded?

1 Answer

In the single-threaded case, cache hits take 12 msec and cache misses take 12 + 75 = 87 msec. The weighted average is 2/3 × 12 + 1/3 × 87 = 37 msec. Thus, the mean request takes 37 msec, and the server can handle about 1000/37 ≈ 27 requests per second.

For a multithreaded server, all the waiting for the disk is overlapped with the processing of other requests, so every request effectively takes 12 msec, and the server can handle 1000/12 ≈ 83 requests per second.
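The arithmetic above can be checked with a short Python sketch (all constants come from the problem statement; the variable names are just for illustration):

```python
# Throughput for the single-threaded vs. multithreaded file server.
CACHE_HIT_MS = 12      # time to receive, dispatch, and process a request
DISK_MS = 75           # extra time when a disk operation is needed
MISS_FRACTION = 1 / 3  # fraction of requests that need the disk

# Single-threaded: the server blocks during disk I/O, so the mean
# service time is the weighted average of the hit and miss times.
mean_ms = (1 - MISS_FRACTION) * CACHE_HIT_MS \
          + MISS_FRACTION * (CACHE_HIT_MS + DISK_MS)
single_rps = 1000 / mean_ms

# Multithreaded: disk waits overlap with other requests, so each
# request effectively costs only the 12 msec of processing time.
multi_rps = 1000 / CACHE_HIT_MS

print(f"mean service time: {mean_ms:.0f} msec")       # 37 msec
print(f"single-threaded:   {single_rps:.0f} req/sec") # ~27
print(f"multithreaded:     {multi_rps:.0f} req/sec")  # ~83
```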
