To provide error correction for 1000-bit blocks, 10 check bits are needed per block for a Hamming code with single-bit error correction. Thus, a megabit of data would require 10,000 check bits. To merely detect a block with a single 1-bit error, one parity bit per block will suffice. Once every 1000 blocks, a block will be found to be in error and an extra block (1001 bits) will have to be transmitted to repair the error. The total overhead for the error-detection-and-retransmission method is only 2001 bits per megabit of data, versus 10,000 bits for a Hamming code.
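The arithmetic in the quoted passage can be checked with a short sketch (assuming, as the passage does, 1000-bit blocks, 1 megabit of data, and exactly one errored block per 1000 blocks):

```python
BLOCK_BITS = 1000          # data bits per block
PARITY_BITS = 1            # one parity bit per block (detection only)
HAMMING_CHECK_BITS = 10    # check bits for single-bit correction of a 1000-bit block
BLOCKS_PER_MBIT = 1000     # 1,000,000 / 1000

# Hamming-code overhead: 10 check bits on every one of the 1000 blocks.
hamming_overhead = HAMMING_CHECK_BITS * BLOCKS_PER_MBIT

# Detection-plus-retransmission overhead:
#   - one parity bit on each of the 1000 blocks sent normally, plus
#   - one retransmitted block (1000 data bits + its own parity bit)
#     for the single errored block per megabit.
detect_overhead = (PARITY_BITS * BLOCKS_PER_MBIT
                   + (BLOCK_BITS + PARITY_BITS))

print(hamming_overhead)  # 10000
print(detect_overhead)   # 2001
```

On this reading, the 2001 counts the 1000 per-block parity bits plus the 1001-bit retransmitted block, not two full data blocks.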
Can anybody please explain to me how it is 2001 bits? If we retransmit a block when an error occurs, it should be 2000 bits (block in error (1000) + retransmitted block (1000)).