
Blind recovery of k/n rate convolutional encoders in a noisy environment

Abstract

In order to enhance the reliability of digital transmissions, error correcting codes are used in every digital communication system. To meet new constraints of data rate or reliability, new coding schemes are currently being developed. Therefore, digital communication systems are in perpetual evolution, and it is becoming very difficult to remain compatible with all the standards in use. A cognitive radio system seems to provide an interesting solution to this problem: the design of an intelligent receiver able to adapt itself to a specific transmission context. This article presents a new algorithm dedicated to the blind recognition of convolutional encoders in the general k/n rate case. After a brief review of convolutional code and dual code properties, a new iterative method dedicated to the blind estimation of convolutional encoders in a noisy context is developed. Finally, case studies are presented to illustrate the performances of our blind identification method.

1 Introduction

In a digital communication system, the use of an error correcting code is mandatory. This error correcting code provides good immunity against channel impairments. Nevertheless, the transmission rate is decreased by the redundancy introduced by the correcting code. To enhance the correction capability and to reduce the impact of the redundancy introduced, new correcting codes are constantly under development. This means that communication systems are in perpetual evolution. Indeed, it is becoming more and more difficult for users to follow all the changes, to stay up-to-date, and to have an electronic communication device compatible with every standard in use around the world. In such contexts, cognitive radio systems provide an obvious solution to these problems. In fact, a cognitive radio receiver is an intelligent receiver able to adapt itself to a specific transmission context and to blindly estimate the transmitter parameters for self-reconfiguration purposes, with knowledge of the received data stream only. As convolutional codes are among the most commonly used error-correcting codes, it seemed to us worth gaining more insight into the blind recovery of such codes.

In this article, a complete method dedicated to the blind identification of the parameters and generator matrices of convolutional encoders in a noisy environment is presented. In a noiseless environment, the first approach to identify a rate 1/n convolutional encoder was proposed in [1]. In [2, 3], this method was extended to the case of a rate k/n convolutional encoder. In [4], we developed a method for the blind recovery of a rate k/n convolutional encoder in a turbocode configuration. Among the available methods, few are dedicated to the blind identification of convolutional encoders in a noisy environment. An approach allowing one to estimate a dual code basis was proposed in [5], and a comparison of this technique with the method proposed in [7] was then given in [6]. In [8], an iterative method for the blind recognition of a rate (n - 1)/n convolutional encoder in a noisy environment was proposed. This method allows the identification of the parameters and generator matrix of a convolutional encoder. It relies on algebraic properties of convolutional codes [9, 10] and of the dual code [11], and is extended here to the case of rate k/n convolutional encoders.

This article is organized as follows. Section 2 presents some properties of convolutional encoders and dual codes. Then, an iterative method for the blind identification of convolutional encoders is described in Section 3. Finally, the performances of the method are discussed in Section 4. Some conclusions and prospects are drawn in Section 5.

2 Convolutional encoders and dual code

Prior to explaining our blind identification method, let us recall the properties of convolutional encoders used in our method.

2.1 Principle and mathematical model

Let C be an (n, k, K) convolutional code, where n is the number of outputs, k is the number of inputs, and K is the constraint length, and let C⊥ denote the dual code of C. Let us also denote by G(D) a polynomial generator matrix of rank k defined by:

G(D) = \begin{pmatrix} g_{1,1}(D) & \cdots & g_{1,n}(D) \\ \vdots & & \vdots \\ g_{k,1}(D) & \cdots & g_{k,n}(D) \end{pmatrix}
(1)

where gi,j(D), i = 1,..., k, j = 1,..., n, are generator polynomials and D represents the delay operator. Let μ i be the memory of the i th input:

\mu_i = \max_{j = 1, \ldots, n} \deg g_{i,j}(D), \quad i = 1, \ldots, k
(2)

where deg is the degree of gi,j(D). The overall memory of the convolutional code, denoted μ, is

\mu = \max_{i = 1, \ldots, k} \mu_i = K - 1
(3)

If the input sequence is denoted by m(D) and the output sequence by c(D), the encoding process can be described by

c(D) = m(D) \cdot G(D)
(4)

In practice, the encoder used is usually an optimal encoder. An encoder is optimal [10] if it has the maximum possible free distance among all codes with the same parameters (n, k, and K); the error correction capability of such optimal codes is therefore much higher. Furthermore, their good algebraic properties [9, 10] can be judiciously exploited for blind identification.
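To make the encoding model (4) concrete, here is a minimal Python sketch for the rate 1/n special case; the function name conv_encode, the bit-list interface, and the example polynomials (the classical octal (7, 5) pair) are illustrative choices, not taken from the paper.

    def conv_encode(msg_bits, gen_polys):
        # Rate 1/n convolutional encoding: gen_polys is a list of n generator
        # polynomials, each given as its binary taps [g(0), g(1), ..., g(mu)].
        out = []
        for t in range(len(msg_bits)):
            for g in gen_polys:
                bit = 0
                for d, tap in enumerate(g):
                    if tap and t - d >= 0:
                        bit ^= msg_bits[t - d]   # c_j(t) = sum_d g_j(d) m(t-d) mod 2
                out.append(bit)
        return out

    # Example: the classical rate 1/2, K = 3 encoder with g1(D) = 1 + D + D^2
    # and g2(D) = 1 + D^2 (octal 7, 5).
    codeword = conv_encode([1, 0, 1, 1, 0, 0], [[1, 1, 1], [1, 0, 1]])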

To model the errors generated by the transmission system, let us consider the binary symmetric channel (BSC) with the error probability, P e , and denote by e(D) the error pattern and by y(D) the received sequence so that:

y(D) = c(D) + e(D)
(5)

Let us also denote by e(i) the i th bit of e(D) so that: Pr(e(i) = 1) = P e and Pr(e(i) = 0) = 1 - P e . The errors are assumed to be independent.

In this article, the noise is modeled by a BSC. This BSC can be used to model an AWGN channel in the context of a hard decision decoding algorithm. Indeed, the BSC can be seen as a model equivalent to the combination of the modulator, the true channel (for example AWGN), and the demodulator (matched filter or correlator followed by a decision rule). Furthermore, in mobile communications, channels are subject to multipath fading, which leads to burst errors in the received bit stream. However, a convolutional code alone is not efficient against such burst errors, so an interleaver is generally used to limit their effect. In this context, after the deinterleaving process at the receiver side, the errors (and thus the equivalent channel including the deinterleaver) can also be modeled by a BSC.
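The BSC model of (5) is easy to simulate; the sketch below is a toy illustration (the function name bsc and its interface are ours, not the paper's).

    import numpy as np

    def bsc(bits, p_e, rng=None):
        # Binary symmetric channel: each bit is flipped independently with probability p_e.
        rng = np.random.default_rng() if rng is None else rng
        c = np.asarray(bits, dtype=np.uint8)
        e = (rng.random(c.size) < p_e).astype(np.uint8)   # error pattern e
        return c ^ e                                      # y = c + e over GF(2)

    # e.g. y = bsc(conv_encode([1, 0, 1, 1, 0, 0], [[1, 1, 1], [1, 0, 1]]), p_e=0.01)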

2.2 The dual code of convolutional encoders

The dual code generator matrix of a convolutional encoder, termed a parity check matrix, can also be used to describe a convolutional code. This ((n - k) × n) polynomial matrix satisfies the following property:

Theorem 1 Let G(D) be a generator matrix of C. If an ((n - k) × n) polynomial matrix, H(D), is a parity check matrix of C, then:

G(D) \cdot H^T(D) = 0
(6)

where (·)^T is the transpose operator.

Corollary 1 Let H(D) be a parity check matrix of C. The output sequence c(D) is a codeword sequence of C if and only if:

c(D) \cdot H^T(D) = 0
(7)

The parity check matrix is an ((n - k) × n) matrix such that:

H(D) = \begin{pmatrix} h_{1,1}(D) & \cdots & h_{1,k}(D) & h_0(D) & & \\ \vdots & & \vdots & & \ddots & \\ h_{n-k,1}(D) & \cdots & h_{n-k,k}(D) & & & h_0(D) \end{pmatrix}
(8)

where h0(D) and hi,j(D) are the generator polynomials of H(D), i = 1,..., n - k and j = 1,..., k.

Let us denote by μ⊥ the memory of the dual code. According to the properties of a dual code and convolutional encoders [9, 11], this memory is defined by

\mu^{\perp} = \sum_{i=1}^{k} \mu_i
(9)

The polynomial f(D) = \sum_{i} f(i) \cdot D^i is a delayfree polynomial if f(0) = 1. According to [12], if the polynomial h_0(D) is a delayfree polynomial, then the convolutional encoder is realizable. It follows that the generator polynomial, h_0(D), is such that

h_0(D) = 1 + h_0(1) \cdot D + \cdots + h_0(\mu^{\perp}) \cdot D^{\mu^{\perp}}
(10)

Let us denote by H, the binary form of H(D) defined by

H = \begin{pmatrix} H_{\mu^{\perp}} & \cdots & H_1 & H_0 & & & \\ & H_{\mu^{\perp}} & \cdots & H_1 & H_0 & & \\ & & H_{\mu^{\perp}} & \cdots & H_1 & H_0 & \\ & & & \ddots & & & \ddots \end{pmatrix}
(11)

where H_i, i = 0,..., μ⊥, are matrices of size ((n - k) × n) such that

H_i = \begin{pmatrix} h_{1,1}(i) & \cdots & h_{1,k}(i) & h_0(i) & & \\ \vdots & & \vdots & & \ddots & \\ h_{n-k,1}(i) & \cdots & h_{n-k,k}(i) & & & h_0(i) \end{pmatrix}
(12)

The parity check matrix (11) is composed of shifted versions of the same (n - k) vectors. These vectors, of size n·(μ⊥ + 1) and denoted by h_j (j = 1,..., n - k), are defined by

h_j = \begin{pmatrix} H_{\mu^{\perp}}(j) & H_{\mu^{\perp}-1}(j) & \cdots & H_1(j) & H_0(j) \end{pmatrix}
(13)

where H_i(j), which corresponds to the j-th row of H_i, is a row vector of size n such that

H_i(j) = \begin{pmatrix} h_{j,1}(i) & \cdots & h_{j,k}(i) & 0_{j-1} & h_0(i) & 0_{n-k-j} \end{pmatrix}
(14)

In (14), 0 l is a zero vector of size l.

In the case of a rate k/n convolutional encoder, each vector h_j (13) is composed of (n - k - 1)·(μ⊥ + 1) zeros. In this configuration, the system given in (7) is split into (n - k) systems:

\begin{pmatrix} c_1(D) & \cdots & c_k(D) & c_{k+s}(D) \end{pmatrix} \cdot \begin{pmatrix} h_{s,1}(D) \\ \vdots \\ h_{s,k}(D) \\ h_0(D) \end{pmatrix} = \sum_{i=1}^{k} c_i(D) \cdot h_{s,i}(D) + c_{k+s}(D) \cdot h_0(D) = 0,
(15)

s = 1,...,(n - k). Thus, the (n - k) vectors (13), called parity checks, are such that

h_s = \begin{pmatrix} H_{\mu^{\perp}}(s) & H_{\mu^{\perp}-1}(s) & \cdots & H_0(s) \end{pmatrix}
(16)

where H i ( s ) is a row vector of size (k + 1) defined by:

H_i(s) = \begin{pmatrix} h_{s,1}(i) & \cdots & h_{s,k}(i) & h_0(i) \end{pmatrix}
(17)

Let us denote by S the size of these parity checks of the code (16) such that

S = (k + 1) \cdot (\mu^{\perp} + 1)
(18)

It follows from (16) and (10) that the (n - k) parity checks, h s , are vectors of degree (S - 1).
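For instance (illustrative values, not tied to a specific encoder of the paper), a rate 1/2 code with k = 1 and dual code memory μ⊥ = 6 has parity checks of size

S = (1 + 1) \cdot (6 + 1) = 14,

i.e., binary vectors of degree 13.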

3 Blind recovery of convolutional code

This section deals with the principle of the proposed blind identification method in the case where the intercepted sequence is corrupted. Only a few methods are available for blind identification in a noisy environment: for example, an Euclidean algorithm-based approach was developed and applied to the case of a rate 1/2 convolutional encoder [13]. At nearly the same time, a probabilistic algorithm based on the Expectation Maximization (EM) algorithm was proposed in [14] to identify a rate 1/n convolutional encoder. Further to our earlier development of a method of blind recovery for a convolutional encoder of rate (n - 1)/n [8], it appeared to us worth extending it, here, to the case of a rate k/n convolutional encoder. Prior to describing the iterative method, which is based on algebraic properties of an optimal convolutional encoder [9, 10] and of the dual code [11], let us briefly recall the principle of our blind identification method when the intercepted sequence is corrupted.

3.1 Blind identification of a convolutional code: principle

This method allows one to identify the parameters (n, k, and K) of an encoder, the parity check matrix, and the generator matrix of an optimal encoder. Its principle is to reshape columnwise the intercepted data bit stream, y, under matrix form. This matrix, denoted R_l, is computed for different values of l, where l is the number of columns. The number of rows in each matrix is equal to L. If the received sequence length is L', then the number of rows of R_l is L = ⌊L'/l⌋, where ⌊·⌋ stands for the integer part. This construction is illustrated in Figure 1.

Figure 1

Example of matrix R_l. An example of the received data bit stream reshaped into matrix form.
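As a small illustration of this construction, the Python sketch below arranges a received bit stream into a matrix with l columns; filling the matrix row by row is an assumption of this sketch (Figure 1 fixes the exact convention used in the paper).

    import numpy as np

    def build_R(y, l):
        # Arrange the received bit stream y into a matrix with l columns and
        # L = floor(L'/l) rows (the stream is written row by row in this sketch).
        y = np.asarray(y, dtype=np.uint8)
        L = len(y) // l
        return y[:L * l].reshape(L, l)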

If the received sequence is not corrupted (y = c, i.e., e = 0), we have shown in [8] that, for α ∈ ℕ, the rank in the Galois field GF(2) of each matrix R_l has two possible values:

  • If l ≠ α·n or l < n_a:

    \operatorname{rank}(R_l) = l
    (19)
  • If l = α·n and l ≥ n_a:

    \operatorname{rank}(R_l) = l \cdot \frac{k}{n} + \mu^{\perp} < l
    (20)

where n a is a key-parameter which corresponds to the first matrix R l with a rank deficiency. Indeed, in [8], for a rate (n - 1)/n convolutional encoder, this parameter proved to be such that

n_a = n \cdot (\mu^{\perp} + 1)
(21)

In this configuration, n a is equal to the size of the parity check (S). But, what is its value in general for a rate k/n convolutional encoder?

For a rate k/n convolutional encoder, we show in Appendix A that the size of the first matrix which exhibits a rank deficiency, n a , is equal to

n_a = n \cdot \left( \left\lfloor \frac{\mu^{\perp}}{n - k} \right\rfloor + 1 \right)
(22)

From (22), it is obvious that the parameter n_a is not, in general, equal to the size of the (n - k) parity checks (16) of the code. In Appendix B, the value of the rank deficiency of the matrix R_{n_a} is discussed.
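In a noiseless setting, the rank property above can be checked directly; the sketch below computes the GF(2) rank by Gaussian elimination and scans the column sizes l with the build_R helper from the sketch above. This is only an illustration of (19)-(22), not the noise-robust procedure of Section 3.2.

    import numpy as np

    def gf2_rank(M):
        # Rank of a binary matrix over GF(2), by row reduction.
        A = np.array(M, dtype=np.uint8) % 2
        rows, cols = A.shape
        rank = 0
        for c in range(cols):
            if rank >= rows:
                break
            pivots = np.nonzero(A[rank:, c])[0]
            if pivots.size == 0:
                continue
            p = rank + pivots[0]
            A[[rank, p]] = A[[p, rank]]                    # bring the pivot row up
            below = np.nonzero(A[rank + 1:, c])[0] + rank + 1
            A[below] ^= A[rank]                            # clear the column below the pivot
            rank += 1
        return rank

    # Noise-free scan: keep the sizes l at which R_l is rank deficient; these are
    # the multiples of n that are greater than or equal to n_a.
    # deficient = [l for l in range(2, 101) if gf2_rank(build_R(y, l)) < l]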

3.2 Blind identification of convolutional code: method

A prerequisite to the extension of the method applied in [8] to the case of a rate k/n convolutional encoder is the identification of the parameter n. Then, a basis of the dual code has to be built to further deduce the value of n_a, which corresponds to the size of the parity check with the smallest degree. Using both this parameter and (22), one can assume different values for k and μ⊥. Then, the (n - k) parity checks (16) and a generator matrix of the code can be estimated.

To identify the number of outputs, n, let us evaluate the likely-dependent columns of R_l. The values of l at which the R_l matrices seem to be rank deficient are detected by converting each R_l matrix into a lower triangular matrix, G_l, through the Gauss-Jordan elimination through pivoting adapted to GF(2):

G l = A l . R l . B l
(23)

where A_l is a row-permutation matrix of size (L × L) and B_l is a matrix of size (l × l) that describes the column combinations. Let N_l(i) be the number of 1s in the lower part of the i-th column of the matrix G_l. In [15, 16], this number was used to estimate an optimal threshold (γ_opt), which allows us to decide whether the i-th column of the matrix R_l is dependent on the other columns. This optimal threshold is chosen such that the sum of the missing probabilities is as small as possible. The number of detected dependent columns, denoted Z(l), is such that

Z(l) = \operatorname{Card}\left\{ i \in \{1, \ldots, l\} \;\middle|\; N_l(i) \le \frac{(L - l) \cdot \gamma_{\mathrm{opt}}}{2} \right\}
(24)

where Card{x} is the cardinality of x. So, the gap between two values of l with a non-zero cardinal Z(l) is equal to the estimated codeword size (n̂). Let I be the set of l-values where the cardinal is non-zero. Let R_i^1 be a ((L - i) × i) matrix composed of the last (L - i) rows of R_i. If b_j, j = 1,..., i, represents the j-th column of B_i, then b_j is considered as a linear form close to the dual code on condition that:

d\left( R_i^{1} \cdot b_j \right) \le (L - i) \cdot \gamma_{\mathrm{opt}}
(25)

where d(x) is the Hamming weight of x. Let us denote the set of all detected linear forms by D. Within this set, the linear form with the smallest degree is taken and denoted, here, by ĥ, and its size by n̂_a. From (22), one can then make different hypotheses about the values of k and μ⊥. This procedure is summed up in Algorithm 1.
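The sketch below gives one possible GF(2) implementation of the triangularization (23) used above: it accumulates the column operations in B_l and returns the weights N_l(i) of the lower parts of the reduced columns. The exact pivoting strategy and the choice of the threshold γ_opt are assumptions of this sketch.

    import numpy as np

    def gauss_gf2_columns(R):
        # GF(2) Gauss-Jordan reduction of R (size L x l), mirroring (23): row swaps
        # play the role of A_l and the column operations are accumulated in B so
        # that the reduced matrix equals A_l . R . B_l over GF(2).
        G = np.array(R, dtype=np.uint8) % 2
        L, l = G.shape
        B = np.eye(l, dtype=np.uint8)
        row = 0
        for c in range(l):
            if row >= L:
                break
            pivots = np.nonzero(G[row:, c])[0]
            if pivots.size == 0:
                continue
            p = row + pivots[0]
            G[[row, p]] = G[[p, row]]          # row permutation (part of A_l)
            for c2 in range(l):
                if c2 != c and G[row, c2]:
                    G[:, c2] ^= G[:, c]        # column operation on the data ...
                    B[:, c2] ^= B[:, c]        # ... mirrored on B_l
            row += 1
        N = G[l:, :].sum(axis=0)               # N_l(i): weight of the lower part of column i
        return G, B, N

    # A column i of B is kept as a linear form close to the dual code when
    # N[i] <= (L - l) * gamma_opt / 2, for a threshold gamma_opt chosen as in [15, 16].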

For a rate (n - 1)/n convolutional encoder with ĥ as parity check, solving the system described in Theorem 1 (see Section 2) enables one to identify the generator matrix. One should, however, note that with a rate k/n convolutional code, a prerequisite to the identification of the generator matrix, G(D), is the identification of the (n - k) parity checks h_j of size S (see (16) and (18)).

Algorithm 1: Estimation of k and μ⊥

   Input: Values of n̂ and n̂_a

   Output: Sets of candidate values k̂ and μ̂⊥

   for k' = 1 to n̂ - 1 do

       for Z = 1 to n̂ - k' do

            μ̂⊥ = μ̂⊥ ∪ { n̂_a · (1 - k'/n̂) - Z };

            k̂ = k̂ ∪ { k' };

       end

   end
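A direct transcription of Algorithm 1 in Python could look like the sketch below; the integer-division form of n̂_a·(1 - k'/n̂) relies on n̂_a being a multiple of n̂ (which holds by construction of n_a), and the non-negativity filter is an extra sanity check added here, not part of the paper's pseudocode.

    def candidate_k_mu(n_hat, na_hat):
        # Candidate (k, dual code memory) pairs compatible with the estimates
        # n_hat and na_hat, following Algorithm 1.
        candidates = []
        for k in range(1, n_hat):                  # k' = 1, ..., n_hat - 1
            for Z in range(1, n_hat - k + 1):      # possible rank deficiencies Z(n_a)
                mu_dual = na_hat * (n_hat - k) // n_hat - Z   # = na_hat * (1 - k/n_hat) - Z
                if mu_dual >= 0:                   # added sanity filter
                    candidates.append((k, mu_dual))
        return candidates

    # Example: candidate_k_mu(3, 9) returns [(1, 5), (1, 4), (2, 2)].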

The identification of these ( n̂ - k̂ ) parity checks is done by building ( n̂ - k̂ ) row vectors, denoted by x_s, so that

x_s = \begin{pmatrix} y_1(t) & \cdots & y_{\hat{k}}(t) & y_{\hat{k}+s}(t) \end{pmatrix},
(26)

s = 1, ..., n̂ - k̂. For each vector x_s, a matrix R_l^s is built as previously done for R_l. Then, for each matrix R_l^s, a linear form of size S has to be estimated. This procedure is summed up in Algorithm 2, where ĥ_s refers to the s-th identified parity check.

Identification of the generator matrix from both these ( n̂ - k̂ ) parity checks and the whole set of code parameters can be realized by solving the system described in Theorem 1.
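As a toy illustration of this last step (not an example taken from the paper), consider a rate 1/2 code whose identified parity check matrix is H(D) = ( h_{1,1}(D)  h_0(D) ). Theorem 1 imposes

G(D) \cdot H^T(D) = g_1(D) \cdot h_{1,1}(D) + g_2(D) \cdot h_0(D) = 0,

and one solution over GF(2) is G(D) = ( h_0(D)  h_{1,1}(D) ), since h_0(D)·h_{1,1}(D) + h_{1,1}(D)·h_0(D) = 0. In the general k/n case, the same kind of system is solved for the k·n generator polynomials from the (n - k) identified parity checks.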

In [15, 17], a similar approach, based on a rank calculation, is used to identify the size of an interleaver. In the latter article, an iterative process is proposed to increase the probability of estimating the correct interleaver size. The principle of this iterative process is to perform permutations on the rows of the R_l matrix so as to obtain a new virtual realization of the received sequence. These permutations increase the probability of obtaining non-erroneous pivots during the Gauss elimination process (23). Our earlier identification of a convolutional encoder relied on a similar approach [8]. Indeed, at the output of our algorithm, either (i) the true encoder, or an optimal encoder, is identified, or (ii) no optimal code is identified. In case (ii), the probability of detecting an optimal convolutional encoder is increased by a new iteration of the algorithm.

The average complexity of one iteration of the process dedicated to the blind identification of a convolutional encoder is O(l_max^4). Indeed, our blind identification method is divided into three steps: (i) identification of n, (ii) identification of a dual code basis, and (iii) identification of the parity checks and a generator matrix. Each step consists of at most (l_max - 1) Gaussian eliminations on R_l matrices of size (L × l), where L = 2·l_max.

Algorithm 2: Estimation of the ( n̂ - k̂ ) parity checks.

   Input: y, n̂, k̂ and μ̂⊥

   Output: the ( n̂ - k̂ ) parity checks ĥ_s

   for s = 1 to ( n̂ - k̂ ) do

         x_s = ( y_1(t) ⋯ y_k̂(t) y_{k̂+s}(t) );

        for l = (k̂ + 1)·(μ̂⊥ + 1) to l_max do

             Build the matrix R_l^s of size (L × l) from x_s;

             T_l = A_l · R_l^s · B_l (Gauss-Jordan elimination, as in (23));

             for i = 1 to l do

                 if N_l(i) ≤ ((L - l)/2)·γ_opt then

                     if deg(b_i^l) = (k̂ + 1)·(μ̂⊥ + 1) then

                          ĥ_s = b_i^l;

                     end

                 end

              end

           end

   end
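As an illustration of how the pieces fit together, the sketch below mirrors the inner loop of Algorithm 2 for one value of s, reusing the helper functions build_R and gauss_gf2_columns sketched above. The interleaving convention assumed for y (output y_j(t) stored at position t·n̂ + j - 1), the value of γ_opt, and the restriction to the single size l = S are simplifications of this sketch, not part of the paper's algorithm.

    import numpy as np

    def estimate_parity_check(y, n_hat, k_hat, mu_dual_hat, s, gamma_opt=0.2):
        # Toy estimation of the s-th parity check h_s (inner loop of Algorithm 2).
        bits = np.asarray(y, dtype=np.uint8)
        frames = bits[: (len(bits) // n_hat) * n_hat].reshape(-1, n_hat)
        # x_s keeps outputs 1..k_hat and output k_hat + s of every time instant, as in (26)
        x_s = frames[:, list(range(k_hat)) + [k_hat + s - 1]].reshape(-1)
        S = (k_hat + 1) * (mu_dual_hat + 1)       # size of the parity check, see (18)
        R = build_R(x_s, S)                       # only l = S is explored in this sketch
        G, B, N = gauss_gf2_columns(R)
        L, l = R.shape
        for i in range(l):
            if N[i] <= (L - l) * gamma_opt / 2:   # near-zero reduced column => parity check
                return B[:, i]
        return None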

Thus, the average complexity of one iteration is such that

\mathcal{O}\!\left( L \cdot \sum_{l=2}^{l_{\max}} l^2 \right) = \mathcal{O}\!\left( 2 \cdot l_{\max} \cdot l_{\max}^3 \right) = \mathcal{O}\!\left( l_{\max}^4 \right)
(27)

Thereby, the average complexity of the iterative process is

\mathcal{O}\!\left( nb_{\mathrm{iter}} \cdot l_{\max}^4 \right)
(28)

where nb_iter is the number of iterations performed.

To identify all the parameters of an encoder, it is necessary to obtain two consecutive rank-deficient matrices. So, the minimum value of l_max is

l_{\max} = n_a + n = n \cdot \left( \left\lfloor \frac{\mu^{\perp}}{n - k} \right\rfloor + 1 \right) + n
(29)

Furthermore, in the literature, the parameters of the convolutional encoders in use typically take quite small values. Indeed, the maximum parameters are such that

n_{\max} = 5, \quad k_{\max} = 4, \quad K_{\max} = 10
(30)

The minimum value of l_max is given in Table 1 for the three optimal encoders used in the following section, which is dedicated to the analysis and performance study of our blind identification method.

Table 1 Different values of lmax (the minimum value of lmax is given for three optimal encoders)
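By way of illustration, applying (29) to the C(2,1,7) encoder (k = 1, so μ⊥ = μ = K - 1 = 6) gives

l_{\max} = 2 \cdot \left( \left\lfloor \tfrac{6}{1} \right\rfloor + 1 \right) + 2 = 16,

and applying it to the C(3,1,4) encoder (μ⊥ = 3, n - k = 2) gives l_max = 3·(⌊3/2⌋ + 1) + 3 = 9.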

4 Analysis and performances

In order to gain more insight into the performances of our blind identification technique, let us consider three convolutional encoders, C(3,1, 4), C(3, 2, 3), and C(2, 1, 7).

Let R_l be a matrix built from 20,000 received bits with l = 2, ..., 100 and L = 200. It is very important to take the amount of data into account to show that our algorithm is well adapted for implementation in a realistic context. The amount of 20,000 bits is quite low compared with the quantities handled by current standards. For example, in the case of mobile communications delivered by UMTS at a data rate of up to 2 Mbps, only 10 ms are needed to receive 20,000 bits. Furthermore, future standards will reach even higher rates.

For each simulation, 1000 Monte Carlo trials were run, and focus was on

  • the impact of the number of iterations upon the probability of detection;

  • the global performances in terms of probability of detection.

In this article, detection means complete identification of the encoder (parameters and generator matrix).

4.1 The detection gain produced by the iterative process

The number of iterations to be made is a compromise between the detection performances and the processing delay introduced in the reception chain (see [8]). To evaluate this number of iterations, let P det (i) be the probability of detecting the true encoder at the i th iteration.

The probability of detecting the true encoder, Pdet, is called probability of detection.

  • C(3, 2, 3) convolutional encoder:

Figure 2 shows the probability of detecting the true encoder (Pdet) compared with P e for 1, 10, and 50 iterations. It shows that, for the C(3, 2, 3) convolutional encoder, 10 iterations of the algorithm result in the best performances: indeed, there is no advantage in performing 50 iterations rather than 10. On the other hand, the gain between 1 and 10 iterations is huge.

Figure 2

C (3,2,3): Probability of detection compared with P e . For the C(3,2,3) encoder, the probability of detecting the true encoder is depicted compared with the channel error probability for 1, 10, and 50 iterations.

  • C(3,1,4) convolutional encoder:

Figure 3 illustrates the evolution of Pdet compared with P e for 1, 10, and 50 iterations in the case of C(3,1, 4) convolutional encoder. It shows that the gain between the 1st and the 50th iterations is nearly nil.

Figure 3

C (3,1,4): Probability of detection compared with P e . For the C(3,1,4) encoder, the probability of detecting the true encoder is depicted compared with the channel error probability for 1, 10, and 50 iterations.

For a rate k/n convolutional code where k ≠ n - 1, Algorithm 2 already performs several runs of Gaussian elimination to estimate the (n - k) parity checks (16). Consequently, for such codes (k ≠ n - 1), there is no need to apply the iterative process: the gain it provides is not significant. But, for a rate (n - 1)/n convolutional encoder, it is clear that the algorithm performances are enhanced by iterations. Moreover, it is important to note that the detection of a convolutional code depends on the parameters of the code, the channel error probability, and the correction capacity of the code. Thus, the number of iterations needed to get the best performances is code dependent. For such a code, it would be worth assessing the impact of the required amount of data. In order to achieve this, for the C(2,1,7) convolutional encoder, a comparison of the detection gain produced by the iterative process for several values of L is proposed.

  • C(2,1,7) convolutional encoder:

Figure 4 depicts Pdet compared with P_e for 1, 5, and 50 iterations and for L = 200. For 1, 10, 40, and 50 iterations, Figure 5 illustrates the evolution of Pdet compared with P_e for L = 500. It shows that, for L = 200, 5 iterations permit us to identify the true encoder, whereas, for L = 500, the identification of the true encoder requires 40 iterations. For L = 200, after 5 iterations, Pdet is close to 1 for P_e ≤ 0.02, whereas after 40 iterations with L = 500, Pdet is close to 1 for P_e ≤ 0.03. It is clear that the number of received bits is an important parameter of our method. Indeed, by increasing the size of the matrices R_l, the probability of obtaining non-erroneous pivots during the iterative process increases. Thus, it is possible to perform more iterations of our algorithm to improve the detection performances. But, for implementation in a realistic context, the required amount of data has to be taken into account. In the next subsection, we show that the algorithm performances are very good with L = 200.

Figure 4

C (2,1,7): Probability of detection compared with P e for L = 200. For the C(2,1,7) encoder and L = 200, the probability of detecting the true encoder is depicted compared with the channel error probability for 1, 5, and 50 iterations.

Figure 5

C (2,1,7): Probability of detection compared with P e for L = 500. For the C(2,1,7) encoder and L = 500, the probability of detecting the true encoder is depicted compared with the channel error probability for 1, 10, 40, and 50 iterations.

4.2 Probability of detection

To analyze the method performances, three probabilities were defined as follows:

  1. probability of detection (Pdet) is the probability of identifying the true encoder;

  2. probability of false-alarm (Pfa) is the probability of identifying an optimal encoder but not the true one;

  3. probability of miss (Pm) is the probability of identifying no optimal encoder.

In order to assess the relevance of our results through a comparison of the different probabilities with the code correction capability, let us denote by BER_r the theoretical residual bit error rate obtained after decoding of the corrupted data stream with a hard decision decoder [12]. Here, to be acceptable, BER_r must be close to 10^-5.

Figures 6, 7, and 8 show the different probabilities compared with P_e after 10 iterations, together with the limit of the 10^-5 acceptable BER_r, for the C(3,2,3), C(3,1,4), and C(2,1,7) convolutional encoders, respectively. One should note that the probability of identifying the true encoder is close to 1 for any P_e with a post-decoding BER_r less than 10^-5. Indeed, the algorithm performances are excellent: Pdet is close to 1 when P_e corresponds to either BER_r < 2 × 10^-4 for the C(3,2,3) convolutional encoder or BER_r < 0.67 × 10^-4 for the C(3,1,4) encoder.

Figure 6

C (3,2,3): Probability of detection, probability of false-alarm, and probability of miss compared with P e . For the C(3, 2, 3), the probability of detection, the probability of false-alarm, and the probability of miss are depicted compared with the channel error probability.

Figure 7

C (3,1,4): Probability of detection, probability of false-alarm, and probability of miss compared with P e . For the C(3, 1, 4), the probability of detection, the probability of false-alarm, and the probability of miss are depicted compared with the channel error probability.

Figure 8

C (2,1,7): Probability of detection, probability of false-alarm, and probability of miss compared with P e . For the C(2, 1, 7), the probability of detection, the probability of false-alarm, and the probability of miss are depicted compared with the channel error probability.

5 Conclusion

This article dealt with the development of a new algorithm dedicated to the reconstruction of convolutional codes from received noisy data streams. The iterative method is based on algebraic properties of optimal convolutional encoders and of their dual codes. This algorithm allows the identification of the parameters and generator matrix of a rate k/n convolutional encoder. The performances were analyzed and proved to be very good. Indeed, the probability of detecting the true encoder proved to be close to 1 for a channel error probability that generates a post-decoding BER_r of less than 10^-5. Moreover, this algorithm requires only a very small amount of received data.

In most digital communication systems, a simple technique, called puncturing, is used to increase the code rate. The blind identification of a punctured code is divided into two parts: (i) identification of the equivalent encoder and (ii) identification of the mother code and puncturing pattern. Our method, dedicated to the blind identification of k/n convolutional encoders, also allows the blind identification of the equivalent encoder of a punctured code. Thus, our future study will be to identify the mother code and the puncturing pattern only from the knowledge of this equivalent encoder.

A The key-parameter n a

According to (20), the rank of the matrix, Rα.n, is:

\operatorname{rank}\left( R_{\alpha \cdot n} \right) = \alpha \cdot n \cdot \frac{k}{n} + \mu^{\perp} < \alpha \cdot n
(31)

Let us seek n_a, with n_a = α·n, which corresponds to the first matrix, R_{n_a}, with a rank deficiency. This corresponds to seeking the minimum value of α such that

\alpha \cdot n \cdot \left( 1 - \frac{k}{n} \right) > \mu^{\perp}
(32)
\alpha \cdot n > \frac{n}{n - k} \cdot \mu^{\perp}
(33)
\alpha > \frac{\mu^{\perp}}{n - k}
(34)

So, the minimum value of α, denoted α min , is such that

\alpha_{\min} = \left\lfloor \frac{\mu^{\perp}}{n - k} \right\rfloor + 1
(35)

According to (35), the key-parameter n a is such that

n_a = n \cdot \alpha_{\min} = n \cdot \left( \left\lfloor \frac{\mu^{\perp}}{n - k} \right\rfloor + 1 \right)
(36)
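As a numerical illustration, using the C(3,1,4) parameters of Section 4 (k = 1, so μ⊥ = μ = 3, and n - k = 2):

\alpha_{\min} = \left\lfloor \tfrac{3}{2} \right\rfloor + 1 = 2, \qquad n_a = 3 \cdot 2 = 6.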

B The rank deficiency of R n a

According to (36), the rank of R n a is such that

\operatorname{rank}\left( R_{n_a} \right) = k \cdot \left( \left\lfloor \frac{\mu^{\perp}}{n - k} \right\rfloor + 1 \right) + \mu^{\perp}
(37)

Therefore, the rank deficiency of R n a , denoted Z ( n a ) = n a - rank R n a , is

Z(n_a) = (n - k) \cdot \left( \left\lfloor \frac{\mu^{\perp}}{n - k} \right\rfloor + 1 \right) - \mu^{\perp} = (n - k) \cdot \left\lfloor \frac{\mu^{\perp}}{n - k} \right\rfloor - \mu^{\perp} + (n - k)
(38)

The modulo operator is equivalent to

(a \bmod b) = a - \left\lfloor \frac{a}{b} \right\rfloor \cdot b
(39)

and thus:

Z(n_a) = -\left( \mu^{\perp} \bmod (n - k) \right) + (n - k)
(40)

The modulo operator is such that

0 \le (a \bmod b) < b
(41)

Consequently, the value of (μ⊥ mod (n - k)) is

0 \le \mu^{\perp} \bmod (n - k) < (n - k)
(42)
-(n - k) < -\left( \mu^{\perp} \bmod (n - k) \right) \le 0
(43)
0 < (n - k) - \left( \mu^{\perp} \bmod (n - k) \right) \le (n - k)
(44)

So, Z(n a ) is such that

0 < Z(n_a) \le (n - k)
(45)

where Z(n_a) ∈ ℕ. Therefore, the rank deficiency of the matrix R_{n_a} is such that

1 \le Z(n_a) \le (n - k)
(46)
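Continuing the numerical illustration of Appendix A (n = 3, k = 1, μ⊥ = 3):

Z(n_a) = -(3 \bmod 2) + 2 = 1,

so the matrix R_6 loses exactly one rank in the noiseless case.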

References

  1. Rice B: Determining the parameters of a rate 1/n convolutional encoder over GF(q). In Proceedings of the 3rd International Conference on Finite Fields and Applications. Glasgow; 1995.

  2. Filiol E: Reconstruction of convolutional encoders over GF(p). In Proceedings of the 6th IMA Conference on Cryptography and Coding. Volume 1355. Springer-Verlag; 1997:100-110.

  3. Barbier J: Reconstruction of turbo-code encoders. In Proceedings of the SPIE Security and Defense, Space Communication Technologies Symposium. Volume 5819. Orlando, FL, USA; 2005:463-473.

  4. Marazin M, Gautier R, Burel G: Blind recovery of the second convolutional encoder of a turbo-code when its systematic outputs are punctured. MTA Rev 2009, XIX(2):213-232.

  5. Barbier J, Sicot G, Houcke S: Algebraic approach for the reconstruction of linear and convolutional error correcting codes. Int J Appl Math Comput Sci 2006, 2(3):113-118.

  6. Côte M, Sendrier N: Reconstruction of convolutional codes from noisy observation. In Proceedings of the IEEE International Symposium on Information Theory (ISIT 2009). Seoul, Korea; 2009:546-550.

  7. Valembois A: Detection and recognition of a binary linear code. Discrete Applied Mathematics 2001, 111(1-2):199-218. doi:10.1016/S0166-218X(00)00353-X

  8. Marazin M, Gautier R, Burel G: Dual code method for blind identification of convolutional encoder for cognitive radio receiver design. In Proceedings of the 5th IEEE Broadband Wireless Access Workshop, IEEE GLOBECOM 2009. Honolulu, Hawaii, USA; 2009.

  9. Forney GD: Convolutional codes I: algebraic structure. IEEE Trans Inf Theory 1970, 16(6):720-738. doi:10.1109/TIT.1970.1054541

  10. McEliece R: The algebraic theory of convolutional codes. In Handbook of Coding Theory. Volume 2. Elsevier Science; 1998:1065-1138.

  11. Forney GD: Structural analysis of convolutional codes via dual codes. IEEE Trans Inf Theory 1973, 19(4):512-518. doi:10.1109/TIT.1973.1055030

  12. Johannesson R, Zigangirov KS: Fundamentals of Convolutional Coding. IEEE Press; 1999. IEEE Series on Digital and Mobile Communication.

  13. Wang F, Huang Z, Zhou Y: A method for blind recognition of convolution code based on Euclidean algorithm. In Proceedings of the International Conference on Wireless Communications, Networking and Mobile Computing; 2007:1414-1417.

  14. Dingel J, Hagenauer J: Parameter estimation of a convolutional encoder from noisy observations. In Proceedings of the IEEE International Symposium on Information Theory (ISIT 2007). Nice, France; 2007:1776-1780.

  15. Sicot G, Houcke S: Blind detection of interleaver parameters. In Proceedings of ICASSP 2005; 2005:829-832.

  16. Sicot G, Houcke S: Theoretical study of the performance of a blind interleaver estimator. In Proceedings of ISIVC 2006. Hammamet, Tunisia; 2006.

  17. Sicot G, Houcke S, Barbier J: Blind detection of interleaver parameters. Signal Processing 2009, 89(4):450-462.


Acknowledgements

This study was supported by the Brittany Region (France).

Author information


Corresponding author

Correspondence to Roland Gautier.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Marazin, M., Gautier, R. & Burel, G. Blind recovery of k/n rate convolutional encoders in a noisy environment. J Wireless Com Network 2011, 168 (2011). https://doi.org/10.1186/1687-1499-2011-168
