To compare error detection/correction capabilities, two properties are commonly used: the number of random errors (how many single-bit errors the code is guaranteed to handle, regardless of where in the message they occur) and the size of the largest burst error (in a burst, any non-zero number of distinct bits may be flipped, but all of them must lie within a window of the burst size).

Checksums that are b bits wide are guaranteed to detect only a single random bit error (recall that two well-placed errors can go undetected) and a burst of size at most b: as long as the error changes only bits within a single word, it is detected, because it cannot cancel itself out. The same holds when the burst is not aligned to the word boundaries, so that it modifies the upper part of one word and the lower part of the adjoining word: as long as the two parts do not overlap in bit position, no subset of the modified bits can cancel another out. A burst of size b+1, however, can go undetected if only its two border bits are flipped; this is again the undetectable two-bit modification.
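This behavior is easy to reproduce. The following sketch (the word width and values are illustrative, not from the text) uses a plain additive checksum over 8-bit words and shows both cases: a burst within one word is always detected, while a burst of size b+1 that flips only its border bits hits the same bit position in adjacent words and can cancel out:

```python
# Plain b-bit additive checksum: split the message into b-bit words
# and sum them mod 2^b (carries beyond the top bit are discarded).
B = 8
MASK = (1 << B) - 1

def checksum(words):
    """Sum of the b-bit words, truncated to b bits."""
    return sum(words) & MASK

msg = [0x12, 0x34, 0x56, 0x78]
ok = checksum(msg)

# A burst confined to at most b consecutive bits touches either one
# word or two non-overlapping parts of adjacent words, so the sum
# always changes and the error is detected:
corrupt = list(msg)
corrupt[1] ^= 0b0000_0111           # 3-bit burst inside one word
assert checksum(corrupt) != ok

# A burst of size b+1 whose only flipped bits are its border bits
# hits the SAME bit position in adjacent words. Here both words have
# bit 7 clear, so each flip adds 2^7; together they add 2^8, which
# is discarded mod 2^8 -- the error goes undetected:
corrupt = list(msg)
corrupt[1] ^= 0x80
corrupt[2] ^= 0x80
assert checksum(corrupt) == ok
```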

The protocols of the Internet protocol suite (IP, TCP, and UDP) use a variant of this basic checksum, the one's complement sum, which wraps the carry out of the most significant bit around and adds it back at the least significant bit. The effect on the detection capabilities is marginal, and the weaknesses of this checksum are widely discussed; for example, a paper by Stone and Partridge provides a good experimental analysis.
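The end-around carry can be sketched as follows. This follows the algorithm described in RFC 1071 for the 16-bit Internet checksum; the function names and the sample header words (an IPv4 header with its checksum field zeroed) are illustrative:

```python
def ones_complement_sum(words):
    """16-bit one's complement sum: a carry out of the most
    significant bit wraps around and is added back at the least
    significant bit (end-around carry)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return total

def internet_checksum(words):
    """The transmitted checksum is the bitwise complement of the
    one's complement sum (RFC 1071)."""
    return ~ones_complement_sum(words) & 0xFFFF

# Example IPv4 header as 16-bit words, checksum field set to zero:
header = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011,
          0x0000, 0xC0A8, 0x0001, 0xC0A8, 0x00C7]
cksum = internet_checksum(header)

# The receiver sums all words including the checksum; an intact
# packet yields the all-ones value 0xFFFF.
header[5] = cksum
assert ones_complement_sum(header) == 0xFFFF
```

Because the carry is folded back in rather than discarded, the top bit position is no weaker than the others; this is the main difference from the plain modular sum.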
