DNA code construction


Use of coding theory in the construction of DNA codes for DNA computers

Introduction

DNA code construction is applicable to the field of DNA-based computation.
DNA sequences are known to appear in the form of double helices in eukaryotes, in which each strand of nucleotides is joined to its complementary strand by hydrogen bonds between paired bases (the sugar and phosphate groups form the backbone within each strand). For the purpose of this entry, we shall focus only on oligonucleotides.

DNA computing involves allowing these oligonucleotide strands to undergo hybridization, i.e., to bind to complementary strands and form longer DNA molecules.

The field of DNA computing was established in Leonard M. Adleman's seminal paper Molecular computation of solutions to combinatorial problems. His work is significant for a number of reasons:

  • It shows how one could use the highly parallel nature of computation performed by DNA to solve problems that are difficult or almost impossible to solve using traditional methods.
  • It is an example of computation at a molecular level, along the lines of nanocomputing; the information density achievable on DNA-based storage media is a potential major advantage, far beyond what the semiconductor industry offers today.
  • It demonstrates unique aspects of DNA as a data structure.

This parallelism can be exploited to solve many computational problems on an enormously large scale, with applications such as cell-based computational systems for cancer diagnostics and treatment, and ultra-high-density storage media [1].

DNA computing requires that the oligonucleotide strands self-assemble in such a way that hybridization occurs in a manner compatible with the goals of the computation.

This selection of codewords (sequences of DNA oligonucleotides) is a major hurdle in itself due to the phenomenon of secondary structure formation, in which DNA strands fold onto themselves during hybridization (also known as self-hybridization), rendering themselves useless for further computations.

The Nussinov-Jacobson algorithm [3] is used to predict secondary structures and to identify design criteria that reduce the possibility of secondary structure formation in a codeword. In essence, this algorithm shows how the presence of a cyclic structure in a DNA code reduces the complexity of testing the codewords for secondary structures.

Novel constructions of such codes include using cyclic reversible extended Goppa codes, generalized Hadamard matrices, and a binary approach. Before diving into these constructions, we shall revisit certain fundamental genetic terminology.

The motivation for the theorems presented in this article is that they concur with the Nussinov-Jacobson algorithm: the existence of a cyclic structure helps reduce complexity and thus prevents secondary structure formation. That is, these constructions satisfy some or all of the design requirements for DNA oligonucleotides at the time of hybridization (which is the core of the DNA computing process) and hence do not suffer from the problems of self-hybridization.

Background

A DNA code is simply a set of sequences over the alphabet $ \mathcal{Q} = \{ \mathit{A}, \mathit{T}, \mathit{C}, \mathit{G} \} $.

Each purine base is the Watson-Crick complement of a unique pyrimidine base (and vice versa): adenine and thymine form a complementary pair, as do guanine and cytosine. This pairing can be described as follows: $ \bar{A} = T, \bar{T} = A, \bar{C} = G, \bar{G} = C $.

Such pairing is chemically very stable and strong. However, pairing of mismatched bases does occur at times due to biological mutations.

Most of the focus on DNA coding has been on constructing large sets of DNA codewords with prescribed minimum distance properties.
For this purpose let us lay down the required groundwork to proceed further.

Let $ q = q_1 q_2 \ldots q_n $ be a word of length $ n $ over the alphabet $ \mathcal{Q} $. For $ 1 \leqslant i \leqslant j \leqslant n $, we will use the notation $ q_{[i,j]} $ to denote the subsequence $ q_i q_{i+1} \ldots q_j $. Furthermore, the sequence obtained by reversing $ q $ will be denoted by $ q^R $. The Watson-Crick complement, or the reverse-complement, of $ q $ is defined to be $ q^{RC} = \bar{q}_n \bar{q}_{n-1} \ldots \bar{q}_1 $, where $ \bar{q}_i $ denotes the Watson-Crick complement of the base $ q_i $.

For any pair of length-$ n $ words $ p $ and $ q $ over $ \mathcal{Q} $, the Hamming distance $ d_H(p,q) $ is the number of positions $ i $ at which $ p_i \neq q_i $. Further, define the reverse-Hamming distance as $ d_H^R(p,q) = d_H(p,q^R) $. Similarly, the reverse-complement Hamming distance is $ d_H^{RC}(p,q) = d_H(p,q^{RC}) $.

Another important code design consideration linked to the process of oligonucleotide hybridization pertains to the GC content of sequences in a DNA code. The GC-content, $ w_{GC}(q) $, of a DNA sequence $ q = q_1 q_2 \ldots q_n $ is defined to be the number of indices $ i $ such that $ q_i \in \{G, C\} $. A DNA code in which all codewords have the same GC-content, $ w $, is called a constant GC-content code.
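These definitions are straightforward to compute. The following Python sketch (function and variable names are our own, not from the source) implements the reverse, the reverse-complement, the three distance measures, and the GC-content:

```python
# Watson-Crick complement of each base
WC = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

def reverse(q):
    return q[::-1]

def reverse_complement(q):
    # q^RC: complement each base, then read the word backwards
    return ''.join(WC[b] for b in reversed(q))

def d_H(p, q):
    # Hamming distance: number of positions where the words differ
    return sum(a != b for a, b in zip(p, q))

def d_H_R(p, q):
    # reverse-Hamming distance d_H(p, q^R)
    return d_H(p, reverse(q))

def d_H_RC(p, q):
    # reverse-complement Hamming distance d_H(p, q^RC)
    return d_H(p, reverse_complement(q))

def gc_content(q):
    # number of indices i with q_i in {G, C}
    return sum(b in 'GC' for b in q)
```

For instance, `reverse_complement("ACGTCC")` returns `"GGACGT"`, and `d_H_RC("ACGT", "ACGT")` is 0, since ACGT is its own reverse-complement.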

A generalized Hadamard matrix $ H \equiv H(n, \mathbb{C}_m) $ is an $ n \times n $ square matrix with entries taken from the set of $m$th roots of unity, $ \mathbb{C}_m = \{ e^{-2\pi i l/m} : l = 0, \ldots, m-1 \} $, that satisfies $ HH^* = nI $. Here $ I $ denotes the identity matrix of order $ n $, while $ H^* $ denotes the complex conjugate transpose of $ H $. We will only concern ourselves with the case $ m = p $ for some prime $ p $. A necessary condition for the existence of a generalized Hadamard matrix $ H(n, \mathbb{C}_p) $ is that $ p \mid n $. The exponent matrix, $ E(n, \mathbb{Z}_p) $, of $ H(n, \mathbb{C}_p) $ is the $ n \times n $ matrix with entries in $ \mathbb{Z}_p = \{0, 1, 2, \ldots, p-1\} $ obtained by replacing each entry $ (e^{-2\pi i/p})^l $ in $ H(n, \mathbb{C}_p) $ by the exponent $ l $.
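As a quick illustration of the definition (our own sketch, not from the source), the $ p \times p $ Fourier matrix with entries $ x^{jk} $ is a generalized Hadamard matrix $ H(p, \mathbb{C}_p) $, and both the condition $ HH^* = nI $ and the exponent matrix can be checked numerically:

```python
import cmath

def fourier_hadamard(p):
    # H[j][k] = x^{jk}, with x a primitive p-th root of unity
    x = cmath.exp(-2j * cmath.pi / p)
    return [[x ** (j * k) for k in range(p)] for j in range(p)]

def is_generalized_hadamard(H, tol=1e-9):
    # verify H H* = n I entry by entry
    n = len(H)
    for a in range(n):
        for b in range(n):
            s = sum(H[a][k] * H[b][k].conjugate() for k in range(n))
            target = n if a == b else 0
            if abs(s - target) > tol:
                return False
    return True

def exponent_matrix(p):
    # replace each entry x^l by the exponent l; for the Fourier matrix
    # the exponent of entry (j, k) is simply jk mod p
    return [[(j * k) % p for k in range(p)] for j in range(p)]
```

For $ p = 3 $ the exponent matrix is `[[0,0,0],[0,1,2],[0,2,1]]`: the first row and column are all zeros, which is the standard form discussed below.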

The elements of the Hadamard exponent matrix $ E $ lie in the Galois field $ GF(p) $, and its row vectors constitute the codewords of what shall be called a generalized Hadamard code.

By definition, a generalized Hadamard matrix $ H $ in standard form has only ones in its first row and column, so its exponent matrix $ E $ has only 0s there. The $ (n-1) \times (n-1) $ square matrix formed by the remaining entries of $ H $ is called the core of $ H $, and the corresponding submatrix of the exponent matrix $ E $ is called the core of $ E $. Thus, by omitting the all-zero first column of $ E $, cyclic generalized Hadamard codes become possible, whose codewords are the row vectors of the punctured matrix.

Also, the rows of such an exponent matrix satisfy the following two properties: (i) in each of the nonzero rows of the exponent matrix, each element of $ \mathbb{Z}_p $ appears a constant number, $ \mathit{n}/\mathit{p} $, of times; and (ii) the Hamming distance between any two rows is $ \mathit{n}(\mathit{p} - 1)/\mathit{p} $. [2]

Property U

Let $ C_p = \{1, x, x^2, \ldots, x^{p-1}\} $ be the cyclic group generated by $ x $, where $ x = e^{2\pi i/p} $ is a complex primitive $ p $th root of unity, and $ p > 2 $ is a fixed prime. Further, let $ A = (x^{a_i}) $ and $ B = (x^{b_i}) $ denote arbitrary vectors over $ C_p $ of length $ N = pt $, where $ t $ is a positive integer. Define the collection of differences between exponents $ Q = \{a_i - b_i \bmod p : i = 1, 2, \ldots, N\} $, and let $ n_q $ denote the multiplicity with which the element $ q $ of $ GF(p) $ appears in $ Q $. [2]

The collection $ Q $ is said to satisfy Property U if and only if each element $ q $ of $ GF(p) $ appears in $ Q $ exactly $ t $ times ($ n_q = t $ for $ q = 0, 1, \ldots, p-1 $).
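Property U is easy to test mechanically. A minimal sketch (function name is ours), taking the list of mod-$p$ exponent differences:

```python
def satisfies_property_U(Q, p):
    # Q: collection of exponent differences a_i - b_i mod p, length N = p*t.
    # Property U holds iff every residue of GF(p) occurs exactly t times.
    N = len(Q)
    if N % p != 0:
        return False
    t = N // p
    counts = [0] * p
    for q in Q:
        counts[q % p] += 1
    return all(c == t for c in counts)
```

For example, with $ p = 3 $ the length-9 vector `[0, 1, 1, 2, 0, 2, 2, 1, 0]` satisfies Property U (each residue appears $ t = 3 $ times), while `[0, 0, 0, 1, 1, 1, 2, 2, 1]` does not.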

The following lemma is of fundamental importance in constructing generalized Hadamard codes.

Lemma (Orthogonality of vectors over $ C_p $). For a fixed prime $ p $, arbitrary vectors $ A, B $ of length $ N = pt $, whose elements are from $ C_p $, are orthogonal if the collection $ Q $ satisfies Property U, where $ Q $ is the collection of mod-$ p $ differences between the Hadamard exponents associated with $ A $ and $ B $.

M-sequences

Let $ V $ be an arbitrary vector of length $ N $ whose elements are in the finite field $ GF(p) $, where $ p $ is a prime. Let the elements of $ V $ constitute the first period of an infinite sequence $ a(V) $ which is periodic with period $ N $. If $ N $ is the least period of $ a(V) $, the sequence is called an M-sequence, or a sequence of maximal least period, obtained by cycling the $ N $ elements of $ V $. If, whenever the elements of the ordered set $ V $ are permuted arbitrarily to yield $ V^* $, the sequence $ a(V^*) $ is an M-sequence, the sequence $ a(V) $ is called M-invariant.
The theorems that follow present conditions that ensure invariance of an M-sequence. In conjunction with a certain uniformity property of polynomial coefficients, these conditions yield a simple method by which complex Hadamard matrices with cyclic core can be constructed.

The goal, as outlined at the head of this article, is to find a cyclic matrix $ E = E_c $ whose elements are in the Galois field $ GF(p) $ and whose dimension is $ N = p^n - 1 $. The rows of $ E $ will be the nonzero codewords of a linear cyclic code $ K $ if and only if there is a polynomial $ g(x) $ with coefficients in $ GF(p) $ that is a proper divisor of $ x^N - 1 $ and that generates $ K $.
In order to have $ N $ nonzero codewords, $ g(x) $ must be of degree $ N - n $. Further, in order to generate a cyclic Hadamard core, the vector of coefficients of $ g(x) $ must have period $ N $ under the cyclic shift operation, and the vector difference of two arbitrary rows of $ E $ (augmented with a zero) must satisfy the uniformity condition of Butson [4], previously referred to as Property U.
One necessary condition for $ N $-periodicity is that $ x^N - 1 = g(x)h(x) $, where $ h(x) $ is monic and irreducible over $ GF(p) $.
The approach here is to replace the last requirement with the condition that the coefficients of the vector $ [0, g(x)] $ be uniformly distributed over $ GF(p) $, i.e., that each residue $ 0, 1, \ldots, p-1 $ appear the same number of times (Property U). This heuristic approach has succeeded in all cases tried, and a proof that it always produces a cyclic core is given below.

1. Code construction using complex Hadamard matrices

Construction algorithm. Consider all monic irreducible polynomials $ h(x) $ over $ GF(p) $ of degree $ n $ which permit a suitable companion $ g(x) $ of degree $ N - n $ such that $ g(x)h(x) = x^N - 1 $, where the vector $ [0, g(x)] $ also satisfies Property U. This requires only a simple computer algorithm for long division over $ GF(p) $. Since $ h(x) \mid x^N - 1 $, the ideal generated by $ g(x) $, mod $ x^N - 1 $, will be a cyclic code $ K $. Moreover, Property U guarantees that the nonzero codewords form a cyclic matrix, each row being of period $ N $ under cyclic permutation, which serves as a cyclic core for the Hadamard matrix $ H(p, p^n) $.
As an example, a cyclic core for $ H(3, 9) $ results from the companions $ h(x) = x^2 + x + 2 $ and $ g(x) = x^6 + 2x^5 + 2x^4 + 2x^2 + x + 1 $. The coefficients of $ g $ indicate that $ \{0, 1, 6\} $ is the relative difference set, mod 8.
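This example can be verified with a few lines of polynomial arithmetic over $ GF(3) $ (our own verification sketch; coefficient lists run from the constant term upward):

```python
def poly_mul_mod(a, b, p):
    # multiply two polynomials (coefficient lists, lowest degree first) over GF(p)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

p = 3
h = [2, 1, 1]               # h(x) = x^2 + x + 2
g = [1, 1, 2, 0, 2, 2, 1]   # g(x) = x^6 + 2x^5 + 2x^4 + 2x^2 + x + 1

# g(x) h(x) should equal x^8 - 1, which over GF(3) is x^8 + 2
product = poly_mul_mod(g, h, p)
```

One can also check that the augmented coefficient vector $ [0, g(x)] $ (padded with zeros up to length $ N = 8 $) contains each residue of $ GF(3) $ exactly three times, i.e., satisfies Property U.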

Theorem. Let $ p $ be a prime and $ N + 1 = p^n $, with $ g(x) $ a monic polynomial of degree $ N - n $ whose extended vector of coefficients $ C = [c_0, c_1, \ldots, c_{N-1}] $ has elements in $ GF(p) $. The conditions are as follows:

(1) the vector $ C = [c_0, c_1, \ldots, c_{N-1}] $ satisfies Property U explained above, and

(2) $ g(x)h(x) = x^N - 1 $, where $ h(x) $ is a monic irreducible polynomial of degree $ n $, guarantee the existence of a $ p $-ary linear cyclic code $ \bar{K} $ of block length $ N $, such that the augmented code $ K = [0, \bar{K}] $ is the Hadamard exponent for the Hadamard matrix $ H(p, p^n) = x^K $, with $ x = e^{2\pi i/p} $, where the core of $ H $ is a cyclic matrix.

Proof:

First, we note that $ g(x) $ is monic, divides $ x^N - 1 $, and has degree $ N - n $; hence the cyclic code it generates has exactly $ p^n - 1 = N $ nonzero codewords. We need to show that the matrix $ E_c $ whose rows are these nonzero codewords constitutes a cyclic core for some complex Hadamard matrix $ H $.

Given that $ C $ satisfies Property U, all of the nonzero residues of $ GF(p) $ appear in $ C $. By cycling through $ C $, we get the desired exponent matrix $ E_c $, in which every codeword is obtained by cycling the first codeword. (This is because the sequence obtained by cycling through $ C $ is an M-invariant sequence.)

We also see that augmenting each codeword of $ E_c $ with a leading zero element produces a vector which satisfies Property U. Also, since the code is linear, the mod-$ p $ vector difference of two arbitrary codewords is also a codeword and thus satisfies Property U. Therefore, the row vectors of the augmented code $ K $ form a Hadamard exponent, and $ x^K $ is the standard form of some complex Hadamard matrix $ H $.

Thus, from the above property, we see that the core of $ E $ is a circulant matrix consisting of all $ N = p^k - 1 $ cyclic shifts of its first row. Such a core is called a cyclic core, wherein each element of $ \mathbb{Z}_p $ appears in each row of $ E $ exactly $ (N+1)/p = p^{k-1} $ times, and the Hamming distance between any two rows is exactly $ (N+1)(p-1)/p = (p-1)p^{k-1} $. The $ N $ rows of the core form a constant-composition code, consisting of the $ N $ cyclic shifts of a length-$ N $ generator over the set $ \mathbb{Z}_p $, in which the Hamming distance between any two codewords is $ (p-1)p^{k-1} $.

Let $ N = p^k - 1 $ for $ p $ prime and $ k \in \mathbb{Z}^+ $. Let $ g(x) = c_0 + c_1 x + c_2 x^2 + \ldots + c_{N-k} x^{N-k} $ be a monic polynomial over $ \mathbb{Z}_p $ of degree $ N - k $ such that $ g(x)h(x) = x^N - 1 $ over $ \mathbb{Z}_p $, for some monic irreducible polynomial $ h(x) \in \mathbb{Z}_p[x] $. Suppose that the augmented vector $ (0, c_0, c_1, \ldots, c_{N-k}, c_{N-k+1}, \ldots, c_{N-1}) $, with $ c_i = 0 $ for $ N - k < i < N $, has the property that it contains each element of $ \mathbb{Z}_p $ the same number of times. Then the $ N $ cyclic shifts of the vector $ g = (c_0, c_1, \ldots, c_{N-1}) $ form the core of the exponent matrix of some Hadamard matrix.

This observation can be inferred from the theorem as explained above. (For more detailed reading, the reader is referred to the paper by Heng and Cooke [2].)
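For $ p = 3 $, $ k = 2 $ (so $ N = 8 $), the coefficient vector of the $ g(x) $ from the earlier $ H(3, 9) $ example, padded to length 8, is $ (1, 1, 2, 0, 2, 2, 1, 0) $. The constant-composition and distance properties of its cyclic shifts can be checked directly (our own verification sketch):

```python
g = [1, 1, 2, 0, 2, 2, 1, 0]   # coefficients of g(x), lowest degree first, padded
N = len(g)

# the N cyclic shifts of g: the rows of the cyclic core
shifts = [g[i:] + g[:i] for i in range(N)]

# constant composition: every row carries the same multiset of symbols
compositions = {tuple(sorted(row)) for row in shifts}

def d_H(u, v):
    # Hamming distance between two rows
    return sum(a != b for a, b in zip(u, v))

# pairwise Hamming distances between distinct rows;
# the theory predicts they all equal (N+1)(p-1)/p = 9*2/3 = 6
distances = {d_H(shifts[i], shifts[j])
             for i in range(N) for j in range(N) if i != j}
```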

DNA codes with constant GC-content can be constructed from constant-composition codes (a constant-composition code over a $ k $-ary alphabet has the property that the number of occurrences of each of the $ k $ symbols within a codeword is the same for every codeword) over $ \mathbb{Z}_p $ by mapping the symbols of $ \mathbb{Z}_p $ to the symbols of the DNA alphabet, $ \mathcal{Q} = \{A, T, C, G\} $. For example, using the cyclic constant-composition code of length $ 3^k - 1 $ over $ \mathbb{Z}_3 $ guaranteed by the theorem proved above, and using the mapping that takes $ 0 $ to $ A $, $ 1 $ to $ T $ and $ 2 $ to $ G $, we obtain a DNA code $ \mathcal{D} $ with $ 3^k - 1 $ codewords and a GC-content of $ 3^{k-1} $. Clearly $ d_H(\mathcal{D}) = 2 \cdot 3^{k-1} $, and in fact, since $ \bar{G} = C $ and no codeword in $ \mathcal{D} $ contains the symbol $ C $, we also have $ d_H^{RC}(\mathcal{D}) \geq 3^{k-1} $.
This is summarized in the following corollary [2].

Corollary. For any $ k \in \mathbb{Z}^+ $, there exists a DNA code $ \mathcal{D} $ with $ 3^k - 1 $ codewords of length $ 3^k - 1 $, constant GC-content $ 3^{k-1} $, $ d_H^{RC}(\mathcal{D}) \geqslant 3^{k-1} $, and in which every codeword is a cyclic shift of a fixed generator codeword $ g $.
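Concretely, for $ k = 2 $, applying the mapping $ 0 \to A $, $ 1 \to T $, $ 2 \to G $ to the 8 cyclic shifts of the core generator from the $ H(3,9) $ example yields a DNA code with exactly the parameters of the corollary (a sketch under these assumptions; names are ours):

```python
GENERATOR = [1, 1, 2, 0, 2, 2, 1, 0]   # core row for p = 3, k = 2
MAPPING = {0: 'A', 1: 'T', 2: 'G'}     # the mapping from the corollary's construction

def cyclic_shifts(v):
    return [v[i:] + v[:i] for i in range(len(v))]

def to_dna(row):
    return ''.join(MAPPING[s] for s in row)

# 3^k - 1 = 8 codewords of length 8
dna_code = [to_dna(row) for row in cyclic_shifts(GENERATOR)]

def gc_content(q):
    return sum(b in 'GC' for b in q)
```

Every codeword here has GC-content $ 3^{k-1} = 3 $ (three G symbols), and the symbol C never occurs, which is what drives the reverse-complement distance bound.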

Each of the following vectors generates a cyclic core of a Hadamard matrix $ H(p, p^n) $ (where $ N + 1 = p^n $; here $ p = 3 $ and $ n = 3 $) [2]:

$  \mathit{g^{(1)}} $ = $ (22201221202001110211210200) $;

$  \mathit{g^{(2)}} $ = $ (20212210222001012112011100) $.

where each vector lists the coefficients of its generator polynomial $ g(x) $.

Thus, we see how DNA codes can be obtained from such generators by mapping $ \{0,1,2\} $ onto $ \{A,T,G\} $.

All such mappings yield codes with essentially the same parameters; however, the actual choice of mapping has a strong influence on the secondary structure of the codewords. For example, one codeword was obtained from $ g^{(1)} $ via the mapping $ 0 \to A $, $ 1 \to T $, $ 2 \to G $, while another codeword was obtained from the same generator via the mapping $ 0 \to G $, $ 1 \to T $, $ 2 \to A $.

2. Code construction via a binary mapping

Perhaps a simpler approach to designing DNA codewords is to use a binary mapping, viewing the design problem as one of constructing binary codes: map the DNA alphabet $ \mathcal{Q} $ onto the set of length-2 binary words as follows: $ A \to 00 $, $ T \to 01 $, $ C \to 10 $, $ G \to 11 $.

As we can see, the first bit of a base's binary image determines which complementary pair the base belongs to.

Let $ \mathit{q} $ be a DNA sequence. The sequence $ \mathit{b(q)} $ obtained by applying the mapping given above to $ \mathit{q} $, is called the binary image of $ \mathit{q} $.

Now, let $ \mathit{b(q)} $ = $ \mathit{b}_0\mathit{b}_1\mathit{b}_2...\mathit{b}_{2n-1} $.

Now, let the subsequence $ \mathit{e(q)} $ = $ \mathit{b}_0\mathit{b}_2...\mathit{b}_{2n-2} $ be called the even subsequence of $ \mathit{b(q)} $, and $ \mathit{o(q)} $ = $ \mathit{b}_1\mathit{b}_3\mathit{b}_5...\mathit{b}_{2n-1} $ be called the odd subsequence of $ \mathit{b(q)} $.

Thus, for example, for $ q = ACGTCC $, we have $ b(q) = 001011011010 $, so that $ e(q) = 011011 $ and $ o(q) = 001100 $.
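A short sketch reproduces this example (the dictionary and helper names are ours):

```python
B = {'A': '00', 'T': '01', 'C': '10', 'G': '11'}

def binary_image(q):
    # b(q): concatenate the 2-bit image of each base
    return ''.join(B[base] for base in q)

def even_subsequence(b):
    # bits b_0 b_2 b_4 ... (the first bit of each base's image)
    return b[0::2]

def odd_subsequence(b):
    # bits b_1 b_3 b_5 ... (the second bit of each base's image)
    return b[1::2]

b = binary_image("ACGTCC")
```

Since the first bit of a base's image is 1 exactly for C and G, the Hamming weight of $ e(q) $ equals the GC-content of $ q $, as noted next.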

Let us define the even component of a DNA code $ \mathcal{C} $ as $ \mathcal{E}(\mathcal{C}) = \{ e(x) : x \in \mathcal{C}\} $, and its odd component as $ \mathcal{O}(\mathcal{C}) = \{ o(x) : x \in \mathcal{C}\} $.

With this choice of binary mapping, the GC-content of a DNA sequence $ q $ equals the Hamming weight of $ e(q) $.

Hence, a DNA code $ \mathcal{C} $ is a constant GC-content code if and only if its even component $ \mathcal{E}(\mathcal{C}) $ is a constant-weight code.

Let $ \mathcal{B} $ be a binary code consisting of $ M $ codewords of length $ n $ and minimum distance $ d_{min} $, such that $ c \in \mathcal{B} $ implies $ \bar{c} \in \mathcal{B} $.
For $ w > 0 $, consider the constant-weight subcode $ \mathcal{B}_w = \{u \in \mathcal{B} : w_H(u) = w \} $, where $ w_H(\cdot) $ denotes Hamming weight. Choose $ w > 0 $ such that $ n \geq 2w + \lceil d_{min}/2 \rceil $, and consider a DNA code $ \mathcal{C}_w $ with the following choice for its even and odd components:

$ \mathcal{E} = \{a \bar{b} : a, b \in \mathcal{B}_w \} $, $ \mathcal{O} = \{ab^{RC} : a, b \in \mathcal{B}, a <_{lex} b \} $,
where $ <_{lex} $ denotes lexicographic ordering. The condition $ a <_{lex} b $ in the definition of $ \mathcal{O} $ ensures that if $ ab^{RC} \in \mathcal{O} $, then $ ba^{RC} \notin \mathcal{O} $, so that distinct codewords in $ \mathcal{O} $ cannot be reverse-complements of each other.
The code $ \mathcal{E}_w $ has $ {\left\vert \mathcal{B}_w \right\vert}^2 $ codewords of length $ 2n $ and constant weight $ n $. Furthermore, $ d_H(\mathcal{E}_w) \geq d_{min} $ and $ d_H^{R}(\mathcal{E}_w) \geq d_{min} $ (this is because $ \mathcal{B}_w $ is a subset of the codewords in $ \mathcal{B} $).
Also,
$ d_H(a \bar{b}, d^{RC}c^R) = d_H(a, d^{RC}) + d_H(\bar{b}, c^R) = d_H(a, d^{RC}) + d_H(c, b^{RC}) $.

Also, note that $ b $ and $ d $ both have weight $ w $, which implies that $ b^{RC} $ and $ d^{RC} $ have weight $ n - w $.
Due to the weight constraint on $ w $, we must have, for all $ a, b, c, d \in \mathcal{B}_w $,
$ d_H(a \bar{b}, d^{RC}c^R) \geq 2 \lceil d_{min}/2 \rceil \geq d_{min} $.

Thus, the code $ \mathcal{O} $ has $ M(M - 1)/2 $ codewords of length $ 2n $. From this, we see that $ d_H(\mathcal{O}) \geq d_{min} $ (because the component codewords of $ \mathcal{O} $ are taken from $ \mathcal{B} $). Similarly, $ d_H^{RC}(\mathcal{O}) \geq d_{min} $.

Therefore, the DNA code $ \mathcal{C} = \bigcup_{w=d_{min}}^{w_{max}} \mathcal{C}_w $,
with $ w_{max} = (n - \lceil d_{min}/2 \rceil )/2 $, has $ \frac{1}{2} M(M - 1) \sum_{w=d_{min}}^{w_{max}} \left\vert \mathcal{B}_w \right\vert^2 $ codewords of length $ 2n $, and satisfies $ d_H(\mathcal{C}) \geq d_{min} $ and $ d_H^{RC}(\mathcal{C}) \geq d_{min} $.
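The even and odd components defined above can be sketched in a few lines (illustrative only; the example code `B` below is a hypothetical choice, closed under complementation as required):

```python
from itertools import combinations

def complement(u):
    # bitwise complement of a binary word
    return ''.join('1' if c == '0' else '0' for c in u)

def reverse_complement(u):
    # RC of a binary word: complement, then reverse
    return complement(u)[::-1]

def even_component(Bw):
    # E = { a b-bar : a, b in B_w }
    return {a + complement(b) for a in Bw for b in Bw}

def odd_component(B):
    # O = { a b^RC : a, b in B, a <lex b }
    return {a + reverse_complement(b) for a, b in combinations(sorted(B), 2)}
```

For instance, with the (hypothetical) code `B = {'0000', '0011', '1100', '1111'}` (so $ M = 4 $) and $ w = 2 $, we have `Bw = {'0011', '1100'}`; the even component then has $ |\mathcal{B}_w|^2 = 4 $ words of length $ 2n = 8 $ and constant weight $ n = 4 $, and the odd component has $ M(M-1)/2 = 6 $ words.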

Given the examples above, one may wonder about the future potential of DNA-based computers.

Despite its enormous potential, this method is unlikely to be implemented in home or office computers, because of the flexibility, speed, and cost factors that favor today's silicon-chip-based devices [1].

However, such a method could be used in situations that demand the accuracy associated with the DNA hybridization mechanism: applications which require operations to be performed with a high degree of reliability.


References

  • [4] J. Adamek, Foundations of Coding, John Wiley, New York, 1991.
  • [5] N. Zierler, Linear recurring sequences, J. Soc. Indust. Appl. Math. 7 (1), 31-48, 1959.