+1 vote
127 views
Consider that TCP implements an extension that allows window sizes much larger than 64 KB. Suppose you are using this extended TCP over a 1 Gbps link with a latency of 50 ms to transfer a 10 MB file, and the TCP receive window is 1 MB. If TCP sends 1 KB packets and the time to send the file is given by the number of required RTTs multiplied by the link latency, then the effective throughput of the transfer is x Mbps. Find x. (Assume no congestion and no lost packets.)
asked in Computer Networks by (85 points) | 127 views
0
Can you post the solution provided by them?

I got 800 as the answer.
0
They have given the answer as 14.3 Mbps.
0
Could anyone provide the solution, please?
0

@Arjun sir, please help with this question.

0
I'm surprised Made Easy asked such a good question. It's rare indeed. Check the solution below.

1 Answer

+1 vote

Correction: the receiver window size is 16 MB, not 1 MB. Also remember that "the TCP algorithm won't set the send window larger than the advertised window, which is never larger than the receive window. Until this limit is reached, the window size will keep doubling (assuming no timeouts)."
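
To make that rule concrete, here is a minimal Python sketch (my own illustration, not code from any linked source; the function name is mine, and the 16 MB receive window follows the correction above):

    def window_after_rtts(rtts, initial_kb=1, rwnd_kb=16 * 1024):
        # The send window doubles every RTT (no timeouts assumed) but is never
        # set larger than the advertised window, here the 16 MB receive window.
        window = initial_kb
        for _ in range(rtts):
            window = min(window * 2, rwnd_kb)
        return window

    print(window_after_rtts(10))   # 1024 (KB), i.e. the window reaches 1 MB after 10 RTTs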

The same question is worked out on page 7 here:

http://www.eng.ucy.ac.cy/christos/courses/ECE654/Homework/Exam%202%20Solution.pdf

First of all, we need to find the number of RTTs in which the file can be sent.

We start with a window size of 1 KB that doubles every RTT. You could enumerate the doublings one by one, but the quicker way is to notice that 1 MB is 1024 times 1 KB, and log base 2 of 1024 is 10.

So, it would take 10 RTTs until the send window becomes 1 MB.
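
As a quick check of that arithmetic (my own one-liner, sizes taken in binary units):

    import math

    # 1 MB is 1024 times 1 KB, and the window doubles once per RTT,
    # so growing from 1 KB to 1 MB takes log2(1024) doublings.
    print(math.log2((1 * 2**20) / (1 * 2**10)))   # 10.0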

Now, how many RTTs does it take to send the file?

RTT     New Send Window
10      1 MB
11      2 MB
12      4 MB
13      8 MB
14      10 MB

So, it would take 14 RTTs to send the whole file.
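
The same count can be reproduced with a small loop (a sketch of the counting argument above, not code from the linked PDF): the amount that can be delivered per RTT keeps doubling, and we stop once the whole 10 MB file is covered.

    file_kb = 10 * 1024        # the 10 MB file, in KB
    window_kb = 1              # slow start begins with a 1 KB window
    sent_kb = 0
    rtts = 0
    while sent_kb < file_kb:
        sent_kb += window_kb   # data delivered during this RTT
        window_kb *= 2         # window doubles for the next RTT (the 16 MB cap is never reached)
        rtts += 1
    print(rtts)                # 14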

They've mentioned in the question that "the time to send the file is given by the number of required RTTs multiplied by the RTT".

The round trip time (RTT) is the two-way delay, which is 2 * 50 ms = 100 ms in this problem. Thus, the time it took to send the file is 14 * (100 ms) = 1.4 s.
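
A quick check of that timing arithmetic in Python (my own sketch):

    one_way_latency_s = 50e-3            # 50 ms one-way latency
    rtt_s = 2 * one_way_latency_s        # 100 ms round-trip time
    transfer_time_s = 14 * rtt_s         # 14 RTTs
    print(round(transfer_time_s, 3))     # 1.4 seconds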

To find the link utilization for that time, we compare the amount of data that was sent with the amount of data that could have been sent during that time. We sent a 10 MB file. On a 1 Gbps link, 1.4 * 10^9 bits could have been sent in 1.4 seconds, but we sent only 10 MB of data (10 * 2^20 * 8 bits).

Throughput = Link utilization * Bandwidth

                   = (10 * 2^20 * 8)/(1.4 * 10^9) * 10^9 bps ≈ 59.9 Mbps, which is the answer.
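
And the same utilization arithmetic checked in Python (again my own sketch, not code from the linked PDF):

    bits_sent = 10 * 2**20 * 8                       # the 10 MB file, in bits
    link_bps = 1e9                                   # 1 Gbps link
    transfer_time_s = 1.4                            # 14 RTTs x 100 ms, from above
    utilization = bits_sent / (transfer_time_s * link_bps)
    print(round(utilization * link_bps / 1e6, 1))    # ~59.9 (Mbps)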

answered by Active (1.5k points)
edited by
0
Thank you, sir. But I have a doubt. First, if it is given that there is no congestion in the network, then why are we applying congestion-control policies here? Second, if the receiver window size is 1 MB, then we would apply congestion avoidance. Is that correct, sir?
