What we would really like is to be able to communicate with tiny probability of error and at a substantial rate. Can we do better than repetition codes? What if we add redundancy to blocks of data instead of encoding one bit at a time? We now study a simple block code that makes use of `parity check bits'.
A block code is a rule for converting a sequence of source bits $\mathbf{s}$, of length $K$, say, into a transmitted sequence $\mathbf{t}$ of length $N$ bits, where, in order to add redundancy, $N$ will of course be greater than $K$. A neat example of a block code is the (7,4) Hamming code, which transmits $N=7$ bits for every $K=4$ source bits. (Figure 1.8.)
Figure: The (7,4) Hamming code. The sixteen codewords have the elegant property that they all differ from each other in at least three bits.
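The minimum-distance property claimed in the caption can be checked by brute force. The sketch below enumerates all sixteen codewords using a generator matrix built from the parity rules described later in this section (the first four transmitted bits copy the source bits; the three parity bits cover source bits 1,2,3; 2,3,4; and 1,3,4) and computes the smallest Hamming distance between any pair:

```python
from itertools import product

# Transpose of the generator matrix for the (7,4) Hamming code
# (one row per transmitted bit); the last three rows are the
# parity-check rows described in the text.
G_T = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 0],  # parity of source bits 1, 2, 3
    [0, 1, 1, 1],  # parity of source bits 2, 3, 4
    [1, 0, 1, 1],  # parity of source bits 1, 3, 4
]

def encode(s):
    """Encode a 4-bit source block into a 7-bit codeword (mod-2 arithmetic)."""
    return tuple(sum(g * b for g, b in zip(row, s)) % 2 for row in G_T)

codewords = [encode(s) for s in product([0, 1], repeat=4)]

# Minimum Hamming distance over all pairs of distinct codewords
min_dist = min(
    sum(a != b for a, b in zip(c1, c2))
    for i, c1 in enumerate(codewords)
    for c2 in codewords[i + 1:]
)
print(len(codewords), min_dist)  # 16 codewords, minimum distance 3
```

Any two distinct codewords indeed differ in at least three bits, which is what will allow the decoder to correct any single-bit error.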
This Hamming code can be written more compactly as follows.
It is a linear code, that is, the transmitted codeword $\mathbf{t}$ can be obtained from the source sequence $\mathbf{s}$ by a linear operation,
$$
\mathbf{t} = \mathbf{G}^{\mathsf{T}} \mathbf{s} ,
\tag{1.1}
$$
where $\mathbf{G}$ is the `generator matrix' of the code,
$$
\mathbf{G}^{\mathsf{T}} =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 \\
1 & 0 & 1 & 1
\end{pmatrix} ,
$$
and the encoding equation (1.1) is understood to use modulo-2 arithmetic. Notice that the first four transmitted bits are identical to the four source bits, and the remaining three bits are parity bits: the first is the parity of the first three source bits; the second is the parity of the last three; and the third parity bit is the parity of source bits one, three and four.
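The three parity rules can be written out directly. A minimal sketch of the encoder, with one line per parity bit as described above (the function name `encode` is illustrative):

```python
def encode(s1, s2, s3, s4):
    # The first four transmitted bits are the source bits themselves;
    # the three parity bits follow the rules stated in the text.
    t5 = (s1 + s2 + s3) % 2  # parity of the first three source bits
    t6 = (s2 + s3 + s4) % 2  # parity of the last three source bits
    t7 = (s1 + s3 + s4) % 2  # parity of source bits one, three and four
    return (s1, s2, s3, s4, t5, t6, t7)

print(encode(1, 0, 0, 0))  # (1, 0, 0, 0, 1, 0, 1)
```

For example, the source block 1000 is transmitted as 1000101: the first and third parity bits are set because source bit one participates in them, while the second does not involve source bit one at all.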