**Block code**
In computer science, a **block code** is a type of channel coding. It adds redundancy to a message so that, at the receiver, one can decode with minimal (theoretically zero) errors, provided that the information rate (amount of transported information in bits per second) does not exceed the channel capacity.

The main characterization of a block code is that it is a fixed-length channel code (unlike source coding schemes such as Huffman coding, and unlike channel coding methods such as convolutional coding). Typically, a block code takes a $k$-digit information word and transforms it into an $n$-digit codeword. The **block length** of such a code is $n$.

Block coding was the primary type of channel coding used in earlier mobile communication systems.

**Formal definition**

A block code is a code which encodes strings formed from an alphabet set $S$ into codewords by encoding each letter of $S$ separately. Let $(k_1, k_2, \ldots, k_m)$ be a sequence of natural numbers, each less than $|S|$. If $S = \{s_1, s_2, \ldots, s_n\}$ and a particular word $W$ is written as $W = s_{k_1} s_{k_2} \ldots s_{k_m}$, then the codeword corresponding to $W$, namely $C(W)$, is

$C(W) = C(s_{k_1}) C(s_{k_2}) \ldots C(s_{k_m})$.
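This letter-by-letter encoding can be sketched in a few lines of Python. The alphabet and the per-letter codebook below are illustrative assumptions, not taken from the article:

```python
# Encode a word letter by letter: C(W) = C(s_k1) C(s_k2) ... C(s_km).
# The codebook is an arbitrary fixed-length (n = 3) binary code for a
# 4-letter alphabet, chosen only for illustration.
CODEBOOK = {"a": "000", "b": "011", "c": "101", "d": "110"}

def encode(word: str) -> str:
    """Concatenate the codeword of each letter of the word."""
    return "".join(CODEBOOK[letter] for letter in word)

print(encode("bad"))  # "011" + "000" + "110" -> "011000110"
```

Because every letter maps to a codeword of the same length $n$, the boundaries between codewords in the output are implicit, which is what makes fixed-length decoding straightforward.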

**A(n,d)**

The trade-off between efficiency (a large information rate) and correction capability can also be seen as the problem of maximizing the total number of codewords given a fixed codeword length and a fixed correction capability (represented by the Hamming distance $d$). $A(n,d)$ denotes the maximum number of codewords for a given codeword length $n$ and Hamming distance $d$.

**Information rate**

When $C$ is a binary block code consisting of $A$ codewords of length $n$ bits, the information rate of $C$ is defined as

$\frac{\log_{2}(A)}{n}$.

If the first $k$ bits of a codeword are independent information bits, then the information rate is

$\frac{\log_{2}(2^k)}{n} = \frac{k}{n}$.
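Both quantities can be explored numerically. The sketch below greedily builds a binary code of length $n = 5$ with minimum Hamming distance $d = 3$ (a lexicographic "lexicode" construction, which in general only gives a lower bound on $A(n,d)$) and then computes the information rate $\log_2(A)/n$ of the resulting code:

```python
from math import log2

def hamming(u: int, v: int) -> int:
    """Hamming distance between two n-bit words stored as integers."""
    return bin(u ^ v).count("1")

def greedy_code(n: int, d: int) -> list[int]:
    """Greedily collect words pairwise at distance >= d (a lower bound on A(n,d))."""
    code: list[int] = []
    for w in range(2 ** n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

code = greedy_code(5, 3)
print(len(code))            # number of codewords found
print(log2(len(code)) / 5)  # information rate log2(A)/n
```

For these small parameters the greedy search happens to attain the true optimum, $A(5,3) = 4$, giving an information rate of $\log_2(4)/5 = 0.4$; for larger $n$ the greedy result is only a lower bound.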

**Sphere packings and lattices**

Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions it is easy to visualize: take a handful of pennies flat on the table and push them together, and the result is a hexagonal pattern, like a bee's nest. But block codes rely on more dimensions, which cannot easily be visualized. The powerful Golay code, used in deep-space communications, uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above.

The theory of coding uses the $N$-dimensional sphere model: for example, how many pennies can be packed into a circle on a tabletop, or, in three dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagonal packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller; but at certain dimensions, the packing uses all the space, and these codes are the so-called perfect codes. There are very few of these codes.
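The sphere-packing picture can be checked directly for a small perfect code. The sketch below builds the binary Hamming(7,4) code from one common systematic generator matrix (a standard textbook choice, assumed here) and verifies that Hamming balls of radius 1 around the 16 codewords cover all $2^7$ binary words exactly:

```python
from itertools import product

# One systematic generator matrix for the [7,4] Hamming code
# (rows are basis codewords); this particular matrix is one
# standard textbook choice.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Multiply the 4-bit message by G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = {encode(msg) for msg in product([0, 1], repeat=4)}

# Radius-1 Hamming balls: each codeword plus its 7 single-bit flips.
covered = set()
for c in codewords:
    covered.add(c)
    for i in range(7):
        flipped = list(c)
        flipped[i] ^= 1
        covered.add(tuple(flipped))

print(len(codewords))  # 16 codewords
print(len(covered))    # 16 * (1 + 7) = 128 = 2**7: a perfect packing
```

Since the balls are disjoint (the code's minimum distance is 3) and $16 \times 8 = 128 = 2^7$, the radius-1 spheres fill the whole space with no gaps, which is exactly what "perfect code" means.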

Another item which is often overlooked is the number of neighbors a single codeword may have. Again, let's use pennies as an example. First, pack the pennies in a rectangular grid: each penny will have 4 near neighbors (and 4 at the corners, which are farther away). In a hexagonal packing, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly.
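This growth is easy to quantify: there are $\binom{n}{d}$ binary words at Hamming distance exactly $d$ from any fixed word of length $n$, which bounds how many minimum-distance neighbors a codeword in a distance-$d$ code can have. A quick check for $d = 3$:

```python
from math import comb

# Number of binary words at Hamming distance exactly d = 3 from a
# fixed word of length n is C(n, 3); it grows cubically in n.
for n in (5, 10, 24):
    print(n, comb(n, 3))  # 10, 120, 2024 potential near neighbors
```

Going from length 5 to length 24 (the Golay code's dimension) multiplies the number of candidate near-neighbor positions by roughly 200.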

The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed of all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough that the total error probability actually suffers.

*Wikimedia Foundation. 2010.*