Theorem 1:
An (m, n) encoding function f detects k or fewer errors if the minimum distance between distinct codewords is at least k + 1.
Proof:
Suppose the minimum distance between distinct codewords is at least k + 1, and suppose a codeword f(w) is transmitted and between 1 and k errors occur. The received word then lies at distance at most k from f(w), so it cannot equal any other codeword; since it is not a codeword, the errors are detected.

Conversely, assume that f detects all sets of k or fewer errors. If two distinct codewords f(w1) and f(w2) were at distance d ≤ k, then changing f(w1) in exactly those d positions would turn it into the codeword f(w2), and these d errors would go undetected, a contradiction. Hence the minimum distance is at least k + 1.
The code f : B^m → B^n is used as follows.
Encoding: The sender splits the message into words of length m: w_1, w_2, ..., w_s. The sender then applies f to each of these words, producing a sequence of codewords f(w_1), f(w_2), ..., f(w_s), which is transmitted.
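The encoding step can be sketched in Python. The code f below is a hypothetical example, not one from the text: a parity-bit code with m = 3 and n = 4 that appends an even-parity bit to each word.

```python
def f(word):
    """Hypothetical (3, 4) encoding: append an even-parity bit."""
    assert len(word) == 3 and set(word) <= {"0", "1"}
    parity = str(sum(map(int, word)) % 2)
    return word + parity

def encode(message):
    """Split the message into words of length m = 3 and apply f to each."""
    assert len(message) % 3 == 0
    words = [message[i:i + 3] for i in range(0, len(message), 3)]
    return [f(w) for w in words]

print(encode("101110"))  # ['1010', '1100']
```

Any other choice of f : B^m → B^n would slot into `encode` the same way; the splitting into length-m words is independent of the particular code.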
Decoding: The receiver obtains a sequence of words of length n: w′_1, w′_2, ..., w′_s, where w′_i is supposed to be f(w_i) but may differ from it due to errors during transmission. Each w′_i is checked for being a codeword. If it is, say w′_i = f(w), then w′_i is decoded to w. Otherwise an error (or errors) is detected. In the case of an error-correcting code, the receiver attempts to correct w′_i by applying a correction function c : B^n → B^n, then decodes the word c(w′_i).

The distance d(w_1, w_2) between binary words w_1, w_2 of the same length is the number of positions in which they differ. The weight of a word w is the number of nonzero digits, which equals its distance to the zero word. The distance between the sent codeword and the received word is equal to the number of errors during transmission.
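The distance and weight definitions, together with the detection check, can be sketched as follows. The decoder here is for the same hypothetical parity code assumed above, where a word is a codeword exactly when its bits have even parity.

```python
def distance(w1, w2):
    """Hamming distance: number of positions in which w1 and w2 differ."""
    assert len(w1) == len(w2)
    return sum(a != b for a, b in zip(w1, w2))

def weight(w):
    """Weight of w = its distance to the all-zero word."""
    return distance(w, "0" * len(w))

def decode(received):
    """Decode a received 4-bit word, or report a detected error."""
    if sum(map(int, received)) % 2 == 0:  # even parity: valid codeword
        return received[:3]               # decode by stripping the parity bit
    return None                           # not a codeword: error detected

print(distance("1010", "1110"))  # 1
print(decode("1010"))            # '101' (no error detected)
print(decode("1011"))            # None  (single error detected)
```

Note that `decode` only detects errors; a received word with two flipped bits has even parity again and would be silently mis-accepted, which matches the theorem: this code has minimum distance 2, so only k = 1 error is guaranteed detectable.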
Theorem. Let f : B^m → B^n be a coding function. Then f allows detection of k or fewer errors if and only if the minimum distance between distinct codewords is at least k + 1.
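For a small code the theorem can be checked exhaustively: compute the minimum distance over all pairs of distinct codewords, and read off the number of detectable errors as k = d_min − 1. The sketch below does this for the hypothetical parity code used in the examples above.

```python
from itertools import combinations, product

def f(word):
    """Hypothetical (3, 4) parity code: append an even-parity bit."""
    return word + str(sum(map(int, word)) % 2)

def distance(w1, w2):
    """Hamming distance between equal-length binary words."""
    return sum(a != b for a, b in zip(w1, w2))

# All 2^3 codewords of the code.
codewords = [f("".join(bits)) for bits in product("01", repeat=3)]

# Minimum distance over all pairs of distinct codewords.
d_min = min(distance(a, b) for a, b in combinations(codewords, 2))

print(d_min)          # 2
print(d_min - 1)      # 1: the code detects k = 1 error
```

For codes with many codewords this brute-force pairwise check becomes expensive; for linear codes the minimum distance equals the minimum weight of a nonzero codeword, which is cheaper to compute.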