__Correction__: The receiver window size is 16 MB, not 1 MB. Also remember: "The TCP algorithm won't set the send window larger than the advertised window, which is never larger than the receive window. Until this limit is reached, the window size keeps doubling (assuming no timeouts)."

The same question is mentioned here in page 7:

http://www.eng.ucy.ac.cy/christos/courses/ECE654/Homework/Exam%202%20Solution.pdf

First of all we need to find the number of RTTs in which the file can be sent.

We start with a window size of 1 KB that doubles every RTT. You could enumerate the windows one by one, but the mathematical way to solve this is to notice that 1 MB is 1024 times the size of 1 KB, and log base 2 of 1024 equals 10.

So, it would take 10 RTTs until the send window becomes 1 MB.
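This first step is just arithmetic; a quick Python check (only the 1 KB starting window and 1 MB target from the problem are assumed):

```python
import math

# Slow start: the send window starts at 1 KB and doubles every RTT.
# 1 MB / 1 KB = 1024, so the number of doublings is log2(1024).
rtts_to_1mb = int(math.log2((1 * 2**20) // (1 * 2**10)))
print(rtts_to_1mb)  # 10
```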

Now, How many RTTs does it take to send the file?

| RTT | Cumulative data sent (≈) |
| --- | --- |
| 10 | 1 MB |
| 11 | 2 MB |
| 12 | 4 MB |
| 13 | 8 MB |
| 14 | 10 MB (file complete) |

(The exact total after n RTTs is 2^n − 1 KB, so after 13 RTTs only about 8 MB has been sent; the 14th RTT delivers the remaining ~2 MB.)

So, it would take 14 RTTs to send the whole file.
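The RTT count above can be verified with a short simulation; a minimal sketch, assuming the 1 KB initial window and the 16 MB receiver-window cap stated in the correction:

```python
# Count RTTs needed to send a 10 MB file under slow start, starting
# from a 1 KB window that doubles each RTT, capped at the 16 MB
# receiver window.
FILE_SIZE = 10 * 2**20      # 10 MB in bytes
RECV_WINDOW = 16 * 2**20    # 16 MB cap on the send window
window = 1 * 2**10          # initial window: 1 KB

sent = 0
rtts = 0
while sent < FILE_SIZE:
    sent += min(window, FILE_SIZE - sent)  # send one window per RTT
    window = min(window * 2, RECV_WINDOW)  # double, up to the cap
    rtts += 1
print(rtts)  # 14
```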

They've mentioned in the question that "**the time to send the file is given by the number of required RTTs multiplied by the RTT**".

__The round trip time (RTT) is the two-way delay, which is 2 × 50 ms = 100 ms in this problem.__ Thus, the time it took to send the file is 14 × (100 ms) = 1.4 s.

To find the link utilization for that time, we compare the amount of data that was sent with the amount of data that could have been sent during that time. We sent a 10 MB file. Over a 1 Gbps link, 1.4 × 10^9 bits could have been sent in 1.4 seconds, but we sent only 10 MB (10 × 2^20 × 8 bits).

Throughput = Link utilization × Bandwidth

= (10 × 2^20 × 8)/(1.4 × 10^9) × 10^9 bps ≈ **59.9 Mbps** (about 6% link utilization).
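Re-running the final arithmetic in Python (a sanity check, using only the numbers from the problem: a 10 MB file, 14 RTTs of 100 ms, and a 1 Gbps link):

```python
file_bits = 10 * 2**20 * 8         # 10 MB file in bits
rtt = 0.100                        # 100 ms round-trip time
transfer_time = 14 * rtt           # 14 RTTs -> 1.4 s
bandwidth = 10**9                  # 1 Gbps link

utilization = file_bits / (transfer_time * bandwidth)
throughput = utilization * bandwidth
print(round(throughput / 1e6, 1))  # throughput in Mbps
```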