Wed, 04/11/2012 - 11:04 — pchakka
## Decoding AEL Codes

### Background


Suppose we have a code over some alphabet (the set of symbols used to represent a codeword). How can we generate, from this code, a family of codes with the same rate and distance but over a larger alphabet? The **ABNNR** and **AEL** constructions do exactly this, and they differ in the inner code they use: in **ABNNR** the inner code is a repetition code, whereas in **AEL** codes the inner code can be any given code.

**AEL** codes are a generalization of the **ABNNR** encoding construction, which uses an expander graph. Alon, Edmonds and Luby made an observation which changed the approach of using expander graphs to encode a message. **ABNNR** encodes the message by assigning weights (symbols of the message) to the edges of the graph.

Decoding is done by reversing the encoding process: starting from the output message (the final codeword over the large alphabet) we traverse the edges of the graph backward and form a candidate set of symbols for each vertex in the left vertex set of the bipartite graph. We then use the decoding algorithm of the inner code to recover the left vertices, and then apply the decoding algorithm of the outer code to get the original message back.

Summarizing the above:

**Step 1:** Traverse along the edges from each right vertex to its neighbors.

**Step 2:** Using the edge weights, form a candidate codeword for each vertex on the left side.

**Step 3:** Apply the decoding algorithm of the inner code to recover the left vertices.

**Step 4:** Apply the decoding algorithm of the outer code to recover the original message.
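The steps above can be sketched for the **ABNNR** case, where the inner code is repetition and inner decoding is just a majority vote. The bipartite graph, message, and names below are illustrative, not part of any particular construction:

```python
from collections import Counter

# Hypothetical 3-regular bipartite graph on 4 left + 4 right vertices.
# LEFT_NBRS[i] lists the right-side neighbors of left vertex i.
LEFT_NBRS = [(0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1)]

def encode(left_bits):
    """Encoding: right vertex j's large-alphabet symbol juxtaposes the
    bits carried on its incident edges (one bit per left neighbor)."""
    right = {j: {} for j in range(4)}
    for i, nbrs in enumerate(LEFT_NBRS):
        for j in nbrs:
            right[j][i] = left_bits[i]   # bit of left vertex i on edge (i, j)
    return right

def decode(right_symbols):
    """Steps 1-3: walk edges back to the left, collect the candidate bit
    each right neighbor holds for a left vertex, and majority-vote
    (decoding the repetition inner code)."""
    left = []
    for i, nbrs in enumerate(LEFT_NBRS):
        candidates = [right_symbols[j][i] for j in nbrs]
        left.append(Counter(candidates).most_common(1)[0][0])
    return left
    # Step 4 (decoding the outer code from these left symbols) is omitted.

msg = [1, 0, 1, 1]
received = encode(msg)
received[0][0] ^= 1                      # corrupt one position of one right symbol
assert decode(received) == msg           # majority vote absorbs the error
```

Note that the single corrupted right symbol is outvoted at every left vertex it touches, which is exactly why the left-side error fraction shrinks in the analysis below.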

The theorem below states that there exist codes over a large alphabet, satisfying the parameters mentioned below, that are decodable from a constant fraction of errors in linear time.

**Theorem 5:** (Guruswami and Indyk) For all r and ε with 0 < r < 1 and ε > 0, there is a Q = Q(ε) such that there exist Q-ary codes of rate r with relative distance at least 1 − r − ε that are uniquely decodable from a (1 − r − ε)/2 fraction of errors in linear time.

**Proof:** As mentioned above, by reversing the method of encoding we can recover the message.

The following assumptions are made in the proof of the above theorem:

- Say a γ fraction of the symbols of the final output codeword are in error, for some γ at most (1 − r − ε)/2.

- Say we have a constant-time algorithm to decode the inner code.

- Say we have a linear-time algorithm to decode the outer code, provided the fraction of errors among its symbols is small enough.

**Sketch:** Take the d-regular bipartite graph used in the encoding process and follow the edges from the right vertices to the left vertices. As there are errors in the codeword, these errors are propagated to the left vertices as we traverse along the edges. We can uniquely decode a vertex on the left side if fewer than d/2 of the edges coming into it from the right vertex set carry errors, i.e. if more than half of its right neighbors are uncorrupted. So let S be the set of left vertices that are decoded incorrectly, and let T be the set of vertices on the right side of the bipartite graph that are uncorrupted, so that |T| = (1 − γ)n (total number of errors + correct symbols of the output = n).

As the graph is an expander graph with second eigenvalue λ, the expander mixing lemma implies it has at least d·|S|·|T|/n − λ·√(|S|·|T|) edges between S and T. But every vertex in S has at least d/2 errors among its neighbors, or, equivalently, each vertex in S has at most d/2 neighbors in T. We can then show that:

number of edges from S to T satisfies d·|S|·|T|/n − λ·√(|S|·|T|) ≤ e(S, T) ≤ d·|S|/2, which on simplification gives:

|S| ≤ |T|·λ² / (d²·(|T|/n − 1/2)²).

Choosing d: the left vertices hold a codeword of the outer code, which has some relative distance δ, and so we can recover the correct codeword if the fraction of wrongly decoded left vertices is less than δ/2; and since this is a Ramanujan graph, λ ≤ 2√(d − 1). Hence substituting for λ and |T| = (1 − γ)n,

we get: |S|/n ≤ 4·(1 − γ) / (d·(1/2 − γ)²), which can be made as small as needed by taking d large.

Therefore we can decode from a (1 − r − ε)/2 fraction of errors in linear time by choosing d appropriately.
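The expander-mixing bound on the fraction of wrongly decoded left vertices can be sanity-checked numerically. The function below is a sketch assuming a d-regular Ramanujan graph (second eigenvalue λ = 2√(d − 1)) and a γ fraction of corrupted right symbols; the function name and parameters are illustrative:

```python
import math

def wrong_left_fraction_bound(d, gamma):
    """Upper bound on |S|/n obtained by combining e(S,T) <= d|S|/2 with the
    expander mixing lemma e(S,T) >= d|S||T|/n - lam*sqrt(|S||T|), taking
    lam = 2*sqrt(d-1) (Ramanujan graph).  gamma is the fraction of corrupted
    right symbols, so |T|/n = 1 - gamma."""
    t = 1.0 - gamma                      # |T|/n, fraction of uncorrupted right vertices
    assert t > 0.5, "argument needs a majority of uncorrupted right symbols"
    lam = 2.0 * math.sqrt(d - 1)
    return t * lam**2 / (d**2 * (t - 0.5)**2)

# The bound decays like O(1/d), so a large enough degree pushes the fraction
# of wrongly decoded left vertices below the outer decoder's radius:
for d in (16, 64, 256):
    print(d, wrong_left_fraction_bound(d, gamma=0.25))
```

For small d the bound is trivial (larger than 1), which matches the proof's need to pick d sufficiently large before the outer decoding step can succeed.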
