"Assuming no congestion and no lost packets"
This means TCP stays in slow start throughout, so the congestion window keeps growing exponentially.
$W_{s}=\min(cwnd, rwnd)$
Here the congestion window doubles after every RTT, and it keeps growing exponentially for the whole observation period.
"If TCP sends 1KB packets"
This is our maximum segment size: MSS = 1 KB.
Now, cwnd (the congestion window) always starts at 1 MSS (unless stated otherwise) and then grows exponentially.
So initially we have $W_{s}=\min(1\text{ KB}, 1\text{ MB}) = 1\text{ KB}$.
Now see the timeline below. The sender window grows exponentially along with the congestion window until the receiver window caps it (i.e., once 1 MB = 1024 KB is reached), after which the sender window stops growing.
The numbers below give the sender's window size in MSS (1 MSS = 1 KB here), and each | marks one RTT:
1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |
After 11 RTTs we have sent $1 + 2 + \dots + 1024 = 2047$ KB. After 19 RTTs we have sent $2047 + 8 \times 1024 = 10239$ KB, so the last window is needed for just a single remaining KB. Hence all 10240 KB (10 MB) of data are sent in 20 RTTs.
Total time taken = 20 RTTs = 20 × 100 ms = 2000 ms = 2 s
Throughput = Total data sent / Total time taken to send it
= 10 MB / 2 s = 5 MBps
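The whole derivation can be checked with a minimal slow-start simulation. This is only a sketch under the problem's assumed values (MSS = 1 KB, rwnd = 1 MB = 1024 KB, 10 MB = 10240 KB of data, RTT = 100 ms); the variable names are mine, not from the question.

```python
# Assumed values from the problem statement above.
MSS_KB = 1            # MSS = 1 KB
RWND_KB = 1024        # receiver window = 1 MB
TOTAL_KB = 10 * 1024  # 10 MB of data to send
RTT_MS = 100          # assumed RTT = 100 ms

cwnd_kb = MSS_KB      # slow start begins with 1 MSS
sent_kb = 0
rtts = 0
while sent_kb < TOTAL_KB:
    window = min(cwnd_kb, RWND_KB)              # W_s = min(cwnd, rwnd)
    sent_kb += min(window, TOTAL_KB - sent_kb)  # send one window per RTT
    cwnd_kb *= 2                                # doubles every RTT (no loss)
    rtts += 1

time_s = rtts * RTT_MS / 1000
print(rtts, "RTTs")                         # 20 RTTs
print(time_s, "s")                          # 2.0 s
print(TOTAL_KB / 1024 / time_s, "MBps")     # 5.0 MBps
```

The loop reproduces the timeline exactly: 2047 KB after 11 RTTs, 10239 KB after 19, and the final 1 KB in the 20th.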
The answer given by Made Easy (14.3 Mbps) is wrong.
Check page 2 here: https://spcl.inf.ethz.ch/Teaching/2014-osnet/assignments/solution12.pdf
https://cseweb.ucsd.edu/classes/sp16/cse123-a/homeworks/hw4_sol.pdf