
67 votes

Assume that the bandwidth for a $\text{TCP}$ connection is $1048560$ bits/sec. Let $\alpha$ be the value of RTT in milliseconds (rounded off to the nearest integer) after which the $\text{TCP}$ window scale option is needed. Let $\beta$ be the maximum possible window size with window scale option. Then the values of $\alpha$ and $\beta$ are

- $63$ milliseconds, $65535 \times 2^{14}$
- $63$ milliseconds, $65535 \times 2^{16}$
- $500$ milliseconds, $65535 \times 2^{14}$
- $500$ milliseconds, $65535 \times 2^{16}$

108 votes

Best answer

In TCP, when the **bandwidth-delay product** increases beyond $64\;\textsf{K}$ bytes, receiver window scaling is needed.

The bandwidth-delay product is the maximum amount of data on the network circuit at any time, measured as RTT $\times$ bandwidth. It is not the time taken to send the data; it is the amount of data that can be sent before an acknowledgement arrives.

So, here, we have bandwidth-delay product $= (1048560/8)\;\text{B/s} \ast \alpha = 65535\;\text{B}$ (the maximum unscaled window, roughly $64\;\textsf{K}$)

$\alpha = (65535 \ast 8)/1048560 = 0.5\;\text{s} = 500$ milliseconds.
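The arithmetic above can be checked with a short sketch (values taken directly from the question):

```python
# Sketch: find the RTT at which the bandwidth-delay product reaches
# the maximum unscaled TCP window of 65535 bytes.
bandwidth_bps = 1048560            # bits/sec, as given in the question
max_unscaled_window = 65535        # bytes: the 16-bit Window field limit

bandwidth_Bps = bandwidth_bps / 8             # 131070 bytes/sec
rtt_s = max_unscaled_window / bandwidth_Bps   # seconds
print(rtt_s * 1000)                           # -> 500.0 (milliseconds)
```

The numbers in the question are chosen so the division is exact: $1048560 / 8 = 131070$ and $65535 / 131070 = 0.5$ s.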

When window scaling happens, a $14$-bit shift count is used in the $\text{TCP}$ header. So, the maximum possible window size gets increased from $2^{16}-1$ to $(2^{16}-1) \ast 2^{14}$, or from $65535$ to $65535 \ast 2^{14}$.
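The scaled maximum can likewise be computed directly; the shift count multiplies the advertised 16-bit window by $2^{14}$:

```python
# Sketch: maximum window size with the window scale option.
max_unscaled = 2**16 - 1     # 65535, limit of the 16-bit Window field
shift_count = 14             # maximum shift count per RFC 1323 / RFC 7323

beta = max_unscaled << shift_count   # equivalent to 65535 * 2**14
print(beta)                          # -> 1073725440 bytes, just under 2**30 (1 GiB)
```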


@Digvijay Pandey Section 2.3 in https://tools.ietf.org/html/rfc1323#page-10 clearly explains why 14 bits shift count is used in TCP header.

TCP determines if a data segment is "old" or "new" by testing whether its sequence number is within 2**31 bytes of the left edge of the window, and if it is not, discarding the data as "old". To insure that new data is never mistakenly considered old and vice-versa, the left edge of the sender's window has to be at most 2**31 away from the right edge of the receiver's window. Similarly with the sender's right edge and receiver's left edge. Since the right and left edges of either the sender's or receiver's window differ by the window size, and since the sender and receiver windows can be out of phase by at most the window size, the above constraints imply that 2 * the max window size must be less than 2**31, or: max window < 2**30.

Since the max window is 2**S (where S is the scaling shift count) times at most 2**16 - 1 (the maximum unscaled window), the maximum window is guaranteed to be < 2**30 if S <= 14. Thus, the shift count must be limited to 14 (which allows windows of 2**30 = 1 Gbyte). If a Window Scale option is received with a shift.cnt value exceeding 14, the TCP should log the error but use 14 instead of the specified value.
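The RFC's bound on the shift count can be sketched as a quick check: with S = 14 the constraint 2 * max_window < 2**31 holds, while S = 15 would violate it.

```python
# Sketch: why RFC 1323 caps the window scale shift count at 14.
# The anti-wraparound constraint is: 2 * max_window < 2**31.

def max_window(shift):
    """Largest advertisable window for a given scale shift count."""
    return (2**16 - 1) << shift   # (2**16 - 1) * 2**shift

assert 2 * max_window(14) < 2**31        # S = 14 satisfies the constraint
assert not (2 * max_window(15) < 2**31)  # S = 15 would break it
```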


22 votes

Answer is C.

The TCP window scale option is needed when the required window size exceeds 65535 B. It means alpha (RTT) should be the time taken to send 65535 B to the receiver.

Time to send 65535 B = 65535 * 8 * 1000/1048560 = 500 ms.

**So alpha will be 500 ms.**

The maximum window size possible in TCP with the window scale option is 1073725440 B, which is **65535 * 2^14**.



20 votes

The **TCP window scale option** is an option to increase the receiver window size allowed in Transmission Control Protocol above its former maximum value of 65,535 bytes.

65,535 bytes = $2^{16}-1$ B $\approx$ 64KB

`The scaling option allows us to increase the window size from 64KB to 1 GB!`

- When is scaling used?

When the bandwidth-delay product exceeds the value of 64K or $2^{16}$, we use scaling.

Use consistent units: bandwidth in bytes/sec and delay (RTT) in seconds, so that their product comes out in bytes.

- What happens when we scale?

Window size increases from 64KB to 1GB, i.e., from $2^{16}$B to $2^{30}$B

Now, coming to the question:

Calculating α

131070 * RTT = 65535

=> RTT = 0.5 s = 500 ms

Calculating β

Window size would be increased from $2^{16}-1$ B towards $2^{30}$ B

i.e. from $65535$ to $65535 \times 2^{14}$ B

**Option C**


I think the maximum window size of the receiver is $2^{16}$ B, so if the bandwidth is at most $2^{16}$ B/s then the receiver's window is filled in one go / one second. If the bandwidth is more than $2^{16}$ B/s, we could transfer more than the size a receiver can advertise. So we need extra bits so that the receiver can advertise a larger buffer size, and those extra bits are available in the options field of the TCP header.

Please correct me if I'm wrong somewhere.



7 votes

Let's talk about the concept first.

Basically, we are not utilizing the given bandwidth to its fullest. In this question, that is due to the delay involved in sending the data.

(You can consider the example of a satellite link: there is a long delay before the acknowledgement arrives after sending data equal to the current window size.)

One solution is to increase the window size, so that more data is in flight and the given bandwidth is utilized to the maximum.

Keep the units consistent: data in bytes and delay (RTT) in seconds.

Let's have a look at our question.

Given,

B = (1048560/8) B/s = 131070 B/s

RTT (alpha) = X s (say)

Current (default) window size, RWIN = 65535 B

(RWIN = Receiver window)

Using the bandwidth-delay product,

B * X = 65535

X = 0.5 s = 500 ms.

To use the given bandwidth to its fullest, we can scale our window up to 1GB (fixed standard),

i.e. with a scaling factor of 2^14

(i.e. shifting the 16-bit window value 14 bits to the left).

So, the scaled window size becomes

65535 * 2^14

(approximately 2^16 * 2^14 = 2^30, i.e. 1GB)

https://www.speedguide.net/faq/what-is-the-bandwidth-delay-product-185

https://networklessons.com/cisco/ccnp-route/bandwidth-delay-product/