First, we need to understand why the token bucket is needed when the leaky bucket was (supposedly) working fine:
(1) When a leaky bucket becomes full, packet loss occurs; in a token bucket, tokens are lost instead and the packets remain safe.
(2) The leaky bucket does not allow hosts to send bursts of data. In simpler terms, an idle host cannot save up credit in a leaky bucket and then send at full throughput when data appears, whereas in a token bucket an idle host can save up permission to send large bursts of data for short periods of time.
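That difference can be sketched in a few lines of code. Below is a minimal token-bucket sketch (the class and method names are my own, not from any particular library): tokens accumulate while the host is idle, capped at the bucket capacity, so a saved-up bucket later permits a burst.

```python
class TokenBucket:
    """Minimal token-bucket sketch: tokens accumulate up to `capacity`
    while the host is idle, allowing bursts later."""

    def __init__(self, rate, capacity):
        self.rate = rate          # token arrival rate (bytes/sec)
        self.capacity = capacity  # bucket size (bytes)
        self.tokens = capacity    # start full (host was idle)

    def refill(self, elapsed):
        # Tokens beyond capacity are lost -- not packets.
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed)

    def try_send(self, nbytes):
        # A packet goes out only if enough tokens have been saved up.
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

An idle host therefore "saves up" as much as `capacity` bytes of sending permission; a leaky bucket has no such memory.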
Let us understand with the help of a diagram (reference: Tanenbaum).
Fig 5.25(a) shows a host that wants to send data at 25MB/sec for 40msec (1MB of bursty data).
Now consider a leaky bucket with an output rate of 2MB/sec: as you can see in Fig 5.25(b), the leaky bucket smooths the data rate out to 2MB/sec for 500msec.
Next consider a token bucket of capacity 250KB (Fig 5.25(c)) with tokens arriving at a rate that allows output at 2MB/sec, i.e. the token arrival rate is 2MB/sec.
Assume the token bucket is full when the 1MB burst arrives.
The token bucket then gives the host the advantage of transmitting at the full 25MB/sec for a burst period, say 'S' sec.
This burst period is given by the expression
S = C/(M-R)
Where C=capacity of the bucket
M=Maximum allowable data rate(Here 25MB/sec)
R=Token arrival rate (Here 2MB/sec)
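The formula can be checked with a one-line helper (the function name is my own); the comment carries the derivation: during the burst the host spends the saved-up C bytes of tokens while new tokens keep arriving at R, so M*S = C + R*S.

```python
def burst_time(capacity, max_rate, token_rate):
    """S = C / (M - R): how long a full bucket sustains output at M.

    During the burst of length S, the host sends M*S bytes, paid for by
    the C saved-up tokens plus the R*S tokens arriving meanwhile:
    M*S = C + R*S  =>  S = C / (M - R).
    """
    return capacity / (max_rate - token_rate)

# Tanenbaum's Fig 5.25 values: C = 250 KB, M = 25 MB/s, R = 2 MB/s
print(burst_time(250e3, 25e6, 2e6))  # ~0.01087 s, i.e. ~10.87 ms
```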
Now, for 'S' second, our host can transmit at full 25MB/sec.
Here S = 250KB/(23MB/sec) = 10.869msec (note that C is the 250KB bucket capacity, not the 1MB burst), which is shown as approximately 11msec in Fig 5.25(c).
In these 10.869 msec at 25MB/sec, the total data sent = 271.739KB (approx.) - the 250KB of saved-up tokens plus the tokens that arrive during the burst.
Data remaining to be sent = 1MB - 271.739KB = 728.261KB.
This remaining data will now be sent evenly at the token arrival rate, i.e. 2MB/sec, and will take 728.261KB/(2MB/sec) = 364.1msec, shown as 364msec in Fig 5.25(c).
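The whole Fig 5.25(c) timeline can be verified with a few lines of arithmetic (a sketch using the numbers above):

```python
C, M, R = 250e3, 25e6, 2e6   # bucket 250 KB, peak 25 MB/s, token rate 2 MB/s
total = 1e6                  # 1 MB burst to send

S = C / (M - R)                          # burst phase at full 25 MB/s
burst_bytes = M * S                      # data sent during the burst
drain_time = (total - burst_bytes) / R   # rest trickles out at 2 MB/s

print(S * 1e3, burst_bytes / 1e3, drain_time * 1e3)
# ~10.87 ms, ~271.74 KB, ~364.13 ms -- matching Fig 5.25(c)
```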
So, the main difference is that the token bucket allowed our host to send part of the data at the full 25MB/sec, which was not the case with the leaky bucket; this worked because the token bucket was full before our 1MB burst of data arrived.
Now coming to this question :
"Tokens are arriving at a rate to sustain output at rate of 10 mega bytes per second"
This simply means the token arrival rate R is 10MB/sec.
Maximum Output rate of the host is 20MB/sec.
It is said that the bucket is full.
So, when the burst of 12MB data arrives, for how long is our host allowed to transmit at the full 20MB/sec, given that the bucket was full before the bursty data arrived?
S = C/(M-R) = 1MB/(20-10)MB/sec = 0.1sec, taking the bucket capacity C as 1MB.
In 0.1 sec, data sent on network at 20MB/sec = 20*0.1=2MB.
Data remaining to be sent=12-2=10MB.
Now, this remaining 10MB of data will be sent at the token arrival rate, i.e. 10MB/sec, and this will take 1sec.
So, total time taken to send the 12MB of data = 0.1 + 1 = 1.1sec. (ANS)
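The same arithmetic for this question, as a quick check (a sketch; the 1MB bucket capacity is the value used in the S computation above):

```python
C, M, R = 1e6, 20e6, 10e6    # bucket 1 MB, peak 20 MB/s, token rate 10 MB/s
total = 12e6                 # 12 MB burst to send

S = C / (M - R)                          # 0.1 s at the full 20 MB/s
burst_bytes = M * S                      # 2 MB sent during the burst
drain_time = (total - burst_bytes) / R   # remaining 10 MB at 10 MB/s -> 1 s

print(S + drain_time)  # total time: 1.1 s
```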
Also, if the question had only asked for how long the host is allowed to send at full capacity, then the answer would simply have been 0.1 sec.