CA2221295C - Parallel concatenated tail-biting convolutional code and decoder therefor - Google Patents
Parallel concatenated tail-biting convolutional code and decoder therefor
- Publication number
- CA2221295C
- Authority
- CA
- Canada
- Prior art keywords
- component
- decoder
- soft
- decision
- composite
- Prior art date
- Legal status
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/29—Coding, decoding or code conversion combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
- H03M13/2957—Turbo codes and decoding
- H03M13/2978—Particular arrangement of the component decoders
- H03M13/2981—Particular arrangement of the component decoders using as many component decoders as component codes
- H03M13/2996—Tail biting
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/3723—Decoding methods using means or methods for the initialisation of the decoder
- H03M13/39—Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
- H03M13/3905—Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
- H04L1/0064—Concatenated codes
- H04L1/0066—Parallel concatenated codes
- H04L1/0067—Rate matching
- H04L1/0068—Rate matching by puncturing
Abstract
A parallel concatenated convolutional coding scheme utilizes tail-biting nonrecursive systematic convolutional codes. The associated decoder iteratively utilizes circular maximum a posteriori decoding to produce hard and soft decision outputs. This encoding/decoding system results in improved error-correction performance for short messages.
Description
WO 97/40582 PCT/US97/06129

PARALLEL CONCATENATED TAIL-BITING CONVOLUTIONAL CODE AND DECODER THEREFOR
Field of the Invention

The present invention relates generally to error-correction coding for the communication of short messages over poor channels and, more particularly, to a parallel concatenated convolutional coding technique and a decoder therefor.
Background of the Invention

One form of parallel concatenated coding, referred to as either parallel concatenated convolutional coding (PCCC) or "turbo coding", has been the subject of recent coding research due to impressive demonstrated coding gains when applied to blocks of 10,000 or more bits. (See C. Berrou, A. Glavieux and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proceedings of the IEEE International Conference on Communications, 1993, pp. 1064-1070; J.D. Andersen, "The TURBO Coding Scheme," Report IT-146, ISSN 0105-854, Institute of Telecommunication, Technical University of Denmark, December 1994; and P. Robertson, "Illuminating the Structure of Code and Decoder of Parallel Concatenated Recursive Systematic (Turbo) Codes," 1994 IEEE Globecom Conference, pp. 1298-1303.) However, it has been shown that the performance of a turbo code degrades substantially as the length of the encoded data block decreases.
This effect is due to the strong dependence of its component recursive systematic convolutional codes' weight structures on block length. A second issue is the proper termination of message blocks applied to a turbo encoder.
As described by O. Joersson and H. Meyr in "Terminating the Trellis of Turbo-Codes," IEE Electronics Letters, vol. 30, no. 16, August 4, 1994, pp. 1285-1286, the interleaving used in turbo encoders may make it impossible to terminate both the interleaved and non-interleaved encoder input sequences with a single set of tail bits. While it is possible to use a second tail sequence embedded into the message structure such that the encoder operating on the interleaved data sequence is properly terminated, this doubles the overhead associated with encoder termination and reduces the effective code rate. An alternative is not to terminate one of the encoder sequences, but this degrades performance of the encoder/decoder system, particularly when applied to short messages. In "Terminating the Trellis of Turbo-Codes in the Same State," IEE Electronics Letters, 1995, vol. 31, no. 1, January 5, pp. 22-23, A.S. Barbulescu and S.S. Pietrobon reported a method that places restrictions on the interleaver design in order to terminate two component recursive systematic convolutional (RSC) encoders with a single termination bit sequence. Their performance results show some degradation compared to the performance obtained by terminating both encoders when an optimized interleaver is used. In addition, published bit-error rate (BER) versus energy-per-bit-to-noise-power-spectral-density ratio (Eb/No) data exhibit a flattening in BER over a range of Eb/No values when RSCs are utilized in the turbo encoder.
Accordingly, it is desirable to provide an improved parallel concatenated coding technique for short data blocks.
Summary of the Invention

In accordance with the present invention, a parallel concatenated convolutional coding scheme utilizes tail-biting nonrecursive systematic convolutional (NSC) codes. The associated decoder iteratively utilizes circular maximum a posteriori (MAP) decoding to produce hard and soft decision outputs. Use of tail-biting codes solves the problem of termination of the input data sequences in turbo coding, thereby avoiding associated decoder performance degradation for short messages. While NSC codes are generally weaker than recursive systematic convolutional (RSC) codes having the same memory asymptotically as the data block length increases, the free distance of an NSC code is less sensitive to data block length. Hence, parallel concatenated coding with NSC codes will perform better than with RSC codes having the same memory for messages that are shorter than some threshold data block size.

Brief Description of the Drawings

The features and advantages of the present invention will become apparent from the following detailed description of the invention when read with the accompanying drawings in which:
FIG. 1 is a simplified block diagram illustrating a parallel concatenated encoder;

FIG. 2 is a simplified block diagram illustrating a decoder for parallel concatenated codes;
FIG. 3 is a simplified block diagram illustrating a tail-biting, nonrecursive systematic convolutional encoder for use in the coding scheme according to the present invention;
FIG. 4 is a simplified block diagram illustrating a circular MAP decoder useful as a component decoder in a decoder for a parallel concatenated convolutional coding scheme according to the present invention; and

FIG. 5 is a simplified block diagram illustrating an alternative embodiment of a circular MAP decoder useful as a component decoder for a parallel concatenated convolutional coding scheme according to the present invention.

Detailed Description of the Invention

FIG. 1 is a general block diagram of encoder signal processing 10 for parallel concatenated coding schemes. It comprises a plurality N of component encoders 12 which operate on blocks of data bits from a source.
The data blocks are permuted by interleaving algorithms via interleavers 14. As shown, there are N-1 interleavers for N encoders 12. Finally, the component encoder outputs are combined into a single composite codeword by a composite codeword formatter 16. The composite codeword formatter is chosen to suit the characteristics of the channel, and it may be followed by a frame formatter chosen to suit the channel and communication system's channel access technique. The frame formatter may also insert other necessary overhead such as control bits and synchronization symbols.
A significant code rate advantage can be obtained in parallel concatenated coding if the component codes are systematic codes. The codewords (output) generated by a systematic encoder include the original data bits provided as inputs to the encoder and additional parity bits. (The redundancy introduced by the parity bits is what yields a code's error correction capability.) Therefore, when systematic encoders are used in the parallel concatenated encoder shown in FIG. 1, the codewords produced by all component encoders 12 contain the input data bits. If formatter 16 forms a data packet or composite codeword comprising only the parity bits generated by each component encoder 12 and the block of information bits to be coded, a substantial improvement in the rate of the composite parallel concatenated code is realized by eliminating repetition of the information bits in the transmitted composite codeword. For example, if component encoder 1 and component encoder 2 of a parallel concatenated convolutional code (PCCC) encoder comprising two component codes are both rate 1/2 codes, the composite parallel concatenated code rate is increased from 1/4 for nonsystematic component codes to 1/3 when systematic component codes are used.
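The rate arithmetic in this example can be checked with a short sketch; the function name and block length are illustrative, not part of the patent:

```python
def composite_rate(num_encoders, block_len, systematic):
    # Each rate-1/2 component encoder produces block_len parity bits.
    parity_bits = num_encoders * block_len
    if systematic:
        # Information bits are transmitted once; each encoder adds only parity.
        total = block_len + parity_bits
    else:
        # Each encoder's full 2L-bit output is transmitted.
        total = 2 * num_encoders * block_len
    return block_len / total

print(composite_rate(2, 1000, systematic=False))           # 0.25 -> rate 1/4
print(round(composite_rate(2, 1000, systematic=True), 4))  # 0.3333 -> rate 1/3
```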
Parallel concatenated coding schemes which utilize recursive systematic convolutional (RSC) codes have been the recent topic of much research. These parallel concatenated convolutional codes (PCCCs) are also commonly known in the literature as "turbo" codes. As noted hereinabove, it has been demonstrated that these PCCCs can achieve impressive performance in terms of bit error rate (BER) versus the energy-per-bit-to-noise-power-spectral-density ratio (Eb/No) for the case of relatively large messages, i.e., ten thousand or more bits. However, it has also been shown that the coding gain obtained with turbo codes degrades significantly with decreasing data block size because the recursive systematic convolutional component codes' strengths are quite sensitive to data block length. On the other hand, the performance of a nonrecursive systematic tail-biting convolutional code is independent of data block length for most practical purposes; the obtainable performance degrades only if the block of data bits encoded is less than a minimum size that is determined by the NSC's decision depth properties.
FIG. 2 illustrates a general decoder 20 for parallel concatenated codes in block diagram form. Decoder 20 comprises the following: a composite-codeword-to-component-codeword converter 22 which converts the composite codeword received from a channel to individual received codewords for each component decoder 24; N component decoders 24 corresponding to the N component encoders of FIG. 1; the same type (or same) interleavers 14 that are used in the parallel concatenated encoder (FIG. 1); and first and second deinterleavers 28 and 29, respectively, that each have a sequence reordering characteristic that is equivalent to a series concatenation of N-1 deinterleavers 30 corresponding to the N-1 interleavers used for encoding. The required ordering of these deinterleavers is shown in FIG. 2 and is the reverse of the ordering of the interleavers. The outputs of component decoders 24 are some type of soft-decision information on the estimated value of each data bit in the received codeword. For example, the outputs of the component decoders may be a first function of the probabilities that the decoded bits are 0 or 1 conditioned on the received sequence of symbols from the channel. One example of such a first function removes the influence of the conditional probability P(d_t^j = 0 | y_t^j) from a component decoder's soft-decision output that is inputted to a next sequential component decoder after an appropriate permutation, where P(d_t^j = 0 | y_t^j) is the probability that the jth information bit at time t is 0 conditioned on the jth (systematic) bit of the received channel output symbol y_t.
Alternatively, the soft-decision information outputted by component decoders 24 may be a function of the likelihood ratio

Λ(d_t^j) = P(d_t^j = 1 | Y_1^L) / P(d_t^j = 0 | Y_1^L) = [1 - P(d_t^j = 0 | Y_1^L)] / P(d_t^j = 0 | Y_1^L) ,

or a function of the log-likelihood ratio log[Λ(d_t^j)].

As shown, the Nth component decoder has a second output, i.e., a second function of the conditional probabilities for the decoded bit values or likelihood ratios above. An example of this second function is the product of P(d_t^j = 0 | Y_1^L) and the a priori probability that d_t^j = 0 received from the previous component decoder.
The decoder for parallel concatenated codes operates iteratively in the following way. The first component decoder (decoder 1) calculates a set of soft-decision values for the sequence of information bits encoded by the first component encoder based on the received codeword and any a priori information about the transmitted information bits. In the first iteration, if there is no a priori information on the source statistics, it is assumed that the bits are equally likely to be 0 or 1 (i.e., P(bit = 0) = P(bit = 1) = 1/2). The soft-decision values calculated by decoder 1 are then interleaved using the same type (or same) interleaver that was used in the encoder to permute the block of data bits for the second encoder. These permuted soft-decision values and the corresponding received codeword comprise the inputs to the next component decoder (decoder 2). The permuted soft-decision values received from the preceding component decoder and interleaver are utilized by the next component decoder as a priori information about the data bits to be decoded. The component decoders operate sequentially in this manner until the Nth decoder has calculated a set of soft-decision outputs for the block of data bits that was encoded by the encoder. The next step is deinterleaving the soft-decision values from the Nth decoder as described hereinabove. The first decoder then operates on the received codeword again using the new soft-decision values from the Nth decoder as its a priori information. Decoder operation proceeds in this manner for the desired number of iterations. At the conclusion of the final iteration, the sequence of values which are a second function of the soft-decision outputs calculated by the Nth decoder is deinterleaved in order to return the data to the order in which it was received by the PCCC encoder. The number of iterations can be a predetermined number, or it can be determined dynamically by detecting decoder convergence.
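The iteration schedule described above can be sketched as follows. The component decoder here is a toy placeholder (a real implementation would use a circular MAP or SOVA decoder), the interleaver is an arbitrary fixed permutation, and the probability-averaging rule is an illustrative assumption; only the interleave/decode/deinterleave flow mirrors the text:

```python
import random

def make_interleaver(length, seed=7):
    perm = list(range(length))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(values, perm):
    return [values[perm[i]] for i in range(len(perm))]

def deinterleave(values, perm):
    out = [0.0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = values[i]
    return out

def toy_component_decoder(received, a_priori):
    # Placeholder soft-in/soft-out decoder: blends the a priori P(bit = 0)
    # with the channel evidence (NOT a real MAP decoder).
    return [0.5 * (r + p) for r, p in zip(received, a_priori)]

L = 8
perm = make_interleaver(L)
received1 = [0.9, 0.1, 0.8, 0.2, 0.7, 0.6, 0.1, 0.9]  # P(bit=0) evidence, decoder 1
received2 = interleave(received1, perm)               # decoder 2 sees permuted order

# First iteration: no source statistics, so P(bit = 0) = P(bit = 1) = 1/2.
a_priori = [0.5] * L
for iteration in range(4):
    soft1 = toy_component_decoder(received1, a_priori)
    soft2 = toy_component_decoder(received2, interleave(soft1, perm))
    a_priori = deinterleave(soft2, perm)  # back to original order for decoder 1

hard = [0 if p > 0.5 else 1 for p in a_priori]
print(hard)  # [0, 1, 0, 1, 0, 0, 1, 0]
```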
The decoder provides soft-decision information which is a function of the probability P(d_t^j = 0 | Y_1^L); that is, the conditional probability that the jth data bit in a k-bit symbol input to the encoder at time t is 0, given that the set of channel outputs Y_1^L = {y_1, ..., y_L} is received. In addition, the decoder may provide hard-decision information as a function of its soft-decision output through a decision device which implements a decision rule, such as: if P(d_t^j = 0 | Y_1^L) > 1/2, then d̂_t^j = 0; if P(d_t^j = 0 | Y_1^L) < 1/2, then d̂_t^j = 1; otherwise, randomly assign d̂_t^j the value 0 or 1.
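The decision rule can be written as a small helper; the function name and the tie-breaking random source are illustrative:

```python
import random

def hard_decision(p_zero, rng=random.Random(0)):
    # p_zero is the soft output P(d = 0 | Y); compare against 1/2.
    if p_zero > 0.5:
        return 0
    if p_zero < 0.5:
        return 1
    return rng.choice([0, 1])  # exact tie: assign 0 or 1 at random

print([hard_decision(p) for p in (0.9, 0.2, 0.51)])  # [0, 1, 0]
```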
Typical turbo decoders utilize either maximum a posteriori (MAP) decoders, such as described by L.R. Bahl, J. Cocke, F. Jelinek and J. Raviv in "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, March 1974, pp. 284-287, or soft output Viterbi algorithm (SOVA) decoders, as described by J. Hagenauer and P. Hoeher in "A Viterbi Algorithm with Soft-Decision Outputs and its Applications," 1989 IEEE Globecom Conference, pp. 1680-1686. A MAP decoder produces the probability that a decoded bit value is 0 or 1. On the other hand, a SOVA decoder typically calculates the likelihood ratio

P(decoded bit is 1) / P(decoded bit is 0)

for each decoded bit. It is apparent that this likelihood ratio can be obtained from P(decoded bit is 0) and vice versa using P(decoded bit is 0) = 1 - P(decoded bit is 1). Some computational advantage has been found when either MAP or SOVA decoders work with the logarithm of likelihood ratios, that is,

log [ P(decoded bit is 1) / P(decoded bit is 0) ] .
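The conversions between bit probabilities and log-likelihood ratios referred to above are one-liners; the function names are illustrative:

```python
import math

def llr_from_p_one(p_one):
    # log-likelihood ratio log[P(bit = 1) / P(bit = 0)], with P(bit = 0) = 1 - P(bit = 1)
    return math.log(p_one / (1.0 - p_one))

def p_one_from_llr(llr):
    # inverse mapping: P(bit = 1) = 1 / (1 + exp(-LLR))
    return 1.0 / (1.0 + math.exp(-llr))

p = 0.9
llr = llr_from_p_one(p)
print(round(llr, 4))                   # 2.1972 (= log 9)
print(round(p_one_from_llr(llr), 4))   # 0.9
```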
It has been demonstrated that the coding gain (error correction capability) obtained with turbo codes degrades significantly with decreasing data block size. Several authors have attributed this behavior primarily to the properties of RSC codes. It has been shown that the distance property of an RSC code increases with increasing data block length. Conversely, the minimum distance of an RSC code decreases with decreasing data block length. A second problem is the difficulty in terminating all RSC codes comprising a turbo coding scheme due to interleaving. Disadvantageously, the adverse effects resulting from a lack of sequence termination or the imposition of restrictions on interleaver design are significant and become even more so with decreasing data block length.
In accordance with the present invention, the component codes in a parallel concatenated convolutional coding scheme comprise tail-biting nonrecursive systematic convolutional codes. The use of such tail-biting codes solves the problem of termination of the input data sequences in turbo coding, thereby avoiding decoder performance degradation for short messages. Although NSC codes are generally weaker than RSC codes having the same memory, the free distance of an NSC code is less sensitive to data block length. Hence, parallel concatenated coding with NSC codes will perform better than with RSC codes having the same memory for messages that are shorter than a predetermined threshold data block size. The performance cross-over point is a function of desired decoded bit error rate, code rate, and code memory.
FIG. 3 illustrates an example of a rate = 1/2, memory = m tail-biting nonrecursive systematic convolutional encoder for use in the parallel concatenated convolutional coding (PCCC) scheme of the present invention. For purposes of description, an (n, k, m) encoder denotes an encoder wherein the input symbols comprise k bits, the output symbols comprise n bits, and m = encoder memory in k-bit symbols. For purposes of illustration, FIG. 3 is drawn for binary input symbols, i.e., k = 1. However, the present invention is applicable to any values of k, n, and m.

Initially, a switch 50 is in the down position, and L input bits are shifted into a shift register 52, k at a time (one input symbol at a time for this example). After loading the Lth bit into the encoder, the switch moves to the up position and encoding begins with the shift of the first bit from a second shift register 54 into the nonrecursive systematic encoder; the state of the encoder at this time is (b_L, b_{L-1}, ..., b_{L-(km-1)}). In this example, the encoder output comprises the current input bit and a parity bit formed in block 56 (shown as modulo-2 addition for this example) as a function of the encoder state and the current input symbol. Encoding ends when the Lth bit is encoded.
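The tail-biting operation just described (preload the register with the last m input bits, then encode all L bits) can be sketched as follows; the particular parity taps are an assumed example, since the text does not fix a generator polynomial:

```python
def tailbiting_nsc_encode(bits, taps=(1, 1, 1)):
    # Rate-1/2 tail-biting nonrecursive systematic encoder, k = 1.
    # taps covers the current input bit plus m = len(taps) - 1 state bits;
    # taps = (1, 1, 1) (memory 2) is an illustrative choice, not FIG. 3's.
    m = len(taps) - 1
    L = len(bits)
    # Tail biting: initialize the state with the LAST m input bits, so the
    # encoder starts in state (b_L, b_{L-1}, ..., b_{L-m+1}).
    state = [bits[L - 1 - i] for i in range(m)]
    out = []
    for b in bits:
        window = [b] + state
        parity = sum(t * x for t, x in zip(taps, window)) % 2  # modulo-2 adder (block 56)
        out.append((b, parity))   # systematic bit plus parity bit
        state = [b] + state[:-1]  # shift-register update
    # Defining tail-biting property: the ending state equals the starting state.
    assert state == [bits[L - 1 - i] for i in range(m)]
    return out

print(tailbiting_nsc_encode([1, 0, 1, 1, 0]))  # [(1, 0), (0, 1), (1, 0), (1, 0), (0, 0)]
```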
In accordance with another aspect of the present invention, the associated decoder for the hereinabove described parallel concatenated encoder comprises a circular MAP decoder as described by the present inventors in commonly assigned, copending CA Patent Application No. 2,221,137. In particular, CA Patent Application No. 2,221,137 describes a circular MAP decoder useful for decoding tail-biting convolutional codes. The circular MAP decoder can deliver both an estimate of the encoded data block and reliability information to a data sink, e.g., a speech synthesis signal processor for use in transmission error concealment or a protocol processor for packet data as a measure of block error probability for use in repeat request decisions.

In particular, as described in CA Patent Application No. 2,221,137, a circular MAP decoder for error-correcting trellis codes that employ tail biting produces soft-decision outputs. The circular MAP decoder provides an estimate of the probabilities of the states in the first stage of the trellis, which probabilities replace the a priori knowledge of the starting state in a conventional MAP decoder. The circular MAP decoder provides the initial state probability distribution in either of two ways. The first involves a solution to an eigenvalue problem for which the resulting eigenvector is the desired initial state probability distribution; with knowledge of the starting state, the circular MAP
decoder performs the rest of the decoding according to the conventional MAP
decoding algorithm. The second is based on a recursion for which the iterations converge to a starting state distribution. After sufficient iterations, a state on the circular sequence of states is known with high probability, and the circular MAP decoder performs the rest of the decoding according to the conventional MAP decoding algorithm.
The objective of the conventional MAP decoding algorithm is to find the conditional probabilities:

P(state m at time t | receive channel outputs y_1, ..., y_L) .

The term L in this expression represents the length of the data block in units of the number of encoder symbols. (The encoder for an (n, k) code operates on k-bit input symbols to generate n-bit output symbols.) The term y_t is the channel output (symbol) at time t.
The MAP decoding algorithm actually first finds the probabilities:

λ_t(m) = P(S_t = m; Y_1^L) ;  (1)

that is, the joint probability that the encoder state at time t, S_t, is m and the set of channel outputs Y_1^L = {y_1, ..., y_L} is received. These are the desired probabilities multiplied by a constant (P(Y_1^L), the probability of receiving the set of channel outputs {y_1, ..., y_L}).
Now define the elements of a matrix Γ_t by

Γ_t(i, j) = P(state j at time t; y_t | state i at time t-1) .

The matrix Γ_t is calculated as a function of the channel transition probability R(Y_t, X), the probability p_t(m | m') that the encoder makes a transition from state m' to m at time t, and the probability q_t(X | m', m) that the encoder's output symbol is X given that the previous encoder state is m' and the present encoder state is m. In particular, each element of Γ_t is calculated by summing over all possible encoder outputs X as follows:

Γ_t(m', m) = Σ_X p_t(m | m') q_t(X | m', m) R(Y_t, X) .  (2)
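Equation (2) can be transcribed directly as a sum over the possible encoder outputs X; the two-state tables and channel model below are invented stand-ins for the stored probabilities, chosen only to exercise the formula:

```python
def gamma_matrix(M, outputs, p_trans, q_out, channel_prob, y_t):
    # eq. (2): Gamma_t(m', m) = sum over X of p_t(m|m') q_t(X|m',m) R(y_t, X)
    return [[sum(p_trans[mp][m] * q_out[mp][m].get(X, 0.0) * channel_prob(y_t, X)
                 for X in outputs)
             for m in range(M)]
            for mp in range(M)]

p_trans = [[0.5, 0.5], [0.5, 0.5]]        # assumed: equiprobable state transitions
q_out = [[{0: 1.0}, {1: 1.0}],            # assumed: output X equals the new state
         [{0: 1.0}, {1: 1.0}]]
R = lambda y, X: 0.9 if y == X else 0.1   # assumed binary channel transition probability

print(gamma_matrix(2, [0, 1], p_trans, q_out, R, y_t=1))
# [[0.05, 0.45], [0.05, 0.45]] under these assumed tables
```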
_iI_ The MAP decoder calculates L of these matrices, one for each trellis stage.
They are formed from the received channel output symbols and the nature of the trellis branches for a given code.
Next define the M joint probability elements of a row vector α_t by

α_t(j) = P(state j at time t; y_1, ..., y_t)  (3)

and the M conditional probability elements of a column vector β_t by

β_t(j) = P(y_{t+1}, ..., y_L | state j at time t)  (4)

for j = 0, 1, ..., (M-1), where M is the number of encoder states. (Note that matrices and vectors are denoted herein by using boldface type.) The steps of the MAP decoding algorithm are as follows:
(i) Calculate α_1, ..., α_L by the forward recursion:

α_t = α_{t-1} Γ_t ,  t = 1, ..., L .  (5)

(ii) Calculate β_1, ..., β_{L-1} by the backward recursion:

β_t = Γ_{t+1} β_{t+1} ,  t = L-1, ..., 1 .  (6)

(iii) Calculate the elements of λ_t by:

λ_t(i) = α_t(i) β_t(i) ,  all i, t = 1, ..., L .  (7)

(iv) Find related quantities as needed. For example, let A_t^j be the set of states S_t = (S_t^1, S_t^2, ..., S_t^{km}) such that the jth element of S_t, S_t^j, is equal to zero. For a conventional non-recursive trellis code, S_t^j = d_t^j, the jth data bit at time t. Therefore, the decoder's soft-decision output is
P(d_t^j = 0 | Y_1^L) = (1 / P(Y_1^L)) Σ_{S_t ∈ A_t^j} λ_t(m) ,

where P(Y_1^L) = Σ_m λ_L(m) and m is the index that corresponds to a state S_t.
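A minimal sketch of steps (i)-(iii) and the soft-decision output follows. The two-state trellis, its Γ matrices, and the mapping "state index 0 means bit 0" are invented toy assumptions, and the conventional terminated initializations α_0 = (1, 0) and β_L = (1, 0)^T are used:

```python
def map_soft_outputs(gammas):
    # gammas[t-1] holds Gamma_t as an M x M list of lists; here M = 2.
    M = len(gammas[0])
    L = len(gammas)
    # (i) forward recursion, eq. (5): alpha_t = alpha_{t-1} Gamma_t
    alpha = {0: [1.0] + [0.0] * (M - 1)}   # alpha_0 = (1, 0, ..., 0)
    for t in range(1, L + 1):
        G = gammas[t - 1]
        alpha[t] = [sum(alpha[t - 1][i] * G[i][j] for i in range(M)) for j in range(M)]
    # (ii) backward recursion, eq. (6): beta_t = Gamma_{t+1} beta_{t+1}
    beta = {L: [1.0] + [0.0] * (M - 1)}    # beta_L assumes ending state 0
    for t in range(L - 1, 0, -1):
        G = gammas[t]                      # Gamma_{t+1} in 1-based indexing
        beta[t] = [sum(G[i][j] * beta[t + 1][j] for j in range(M)) for i in range(M)]
    # (iii) lambda_t(i) = alpha_t(i) * beta_t(i), eq. (7)
    lam = {t: [alpha[t][i] * beta[t][i] for i in range(M)] for t in range(1, L + 1)}
    p_y = sum(lam[L])                      # P(Y_1^L) = sum over m of lambda_L(m)
    # soft output: P(d_t = 0 | Y), summing lambda_t over states whose bit is 0
    return [lam[t][0] / p_y for t in range(1, L + 1)]

G = [[0.4, 0.1], [0.1, 0.4]]               # toy Gamma matrix, reused at each stage
soft = map_soft_outputs([G, G, G])
print([round(p, 4) for p in soft])         # [0.8947, 0.8947, 1.0]
```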
The decoder's hard-decision or decoded bit output is obtained by applying P(d_t^j = 0 | Y_1^L) to the following decision rule: if P(d_t^j = 0 | Y_1^L) > 1/2, then d̂_t^j = 0; if P(d_t^j = 0 | Y_1^L) < 1/2, then d̂_t^j = 1; otherwise, randomly assign d̂_t^j the value 0 or 1.
As another example of a related quantity for step (iv) hereinabove, the matrix of probabilities σ_t comprises elements defined as follows:

σ_t(i, j) = P(S_{t-1} = i; S_t = j; Y_1^L) = α_{t-1}(i) Γ_t(i, j) β_t(j) .

These probabilities are useful when it is desired to determine the a posteriori probability of the encoder output bits.
In the standard application of the MAP decoding algorithm, the forward recursion is initialized by the vector α_0 = (1, 0, ..., 0), and the backward recursion is initialized by β_L = (1, 0, ..., 0)^T. These initial conditions are based on the assumptions that the encoder's initial state S_0 = 0 and its ending state S_L = 0.
One embodiment of the circular MAP decoder determines the initial state probability distribution by solving an eigenvalue problem as follows. Let α_t, β_t, Γ_t and λ_t be as before, but take the initial α_0 and β_L as follows:

Set β_L to the column vector (1 1 1 ... 1)^T.

Let α_0 be an unknown (vector) variable.
Then,

(i) Calculate Γ_t for t = 1, 2, ..., L according to equation (2).

(ii) Find the largest eigenvalue of the matrix product Γ_1 Γ_2 ... Γ_L. Normalize the corresponding eigenvector so that its components sum to unity. This vector is the solution for α_0. The eigenvalue is P(Y_1^L).

(iii) Form the succeeding α_t by the forward recursion set forth in equation (5).

(iv) Starting from β_L, initialized as above, form the β_t by the backward recursion set forth in equation (6).

(v) Form the λ_t as in equation (7), as well as other desired variables, such as, for example, the soft-decision output P(d_t^j = 0 | Y_1^L) or the matrix of probabilities σ_t, described hereinabove.

The inventors have shown that the unknown variable α_0 satisfies the matrix equation

α_0 = (α_0 Γ_1 Γ_2 ... Γ_L) / P(Y_1^L) .

From the fact that this formula expresses a relationship among probabilities, we know that the product of Γ_t matrices on the right has largest eigenvalue equal to P(Y_1^L), and that the corresponding eigenvector must be a probability vector.

With the initial β_L = (1 1 1 ... 1)^T, equation (6) gives β_{L-1}. Thus, repeated applications of this backward recursion give all the β_t. Once α_0 is known and β_L is set, all computations in the circular MAP decoder of the present invention follow the conventional MAP decoding algorithm.
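One way to carry out step (ii) is power iteration on the matrix product, which converges to the dominant left eigenvector that α_0 must equal; the Γ matrices below are invented toy values, and the iteration count is an arbitrary choice:

```python
def circular_init_distribution(gammas, iters=100):
    # Form T = Gamma_1 Gamma_2 ... Gamma_L by repeated matrix multiplication.
    M = len(gammas[0])
    T = [[1.0 if i == j else 0.0 for j in range(M)] for i in range(M)]
    for G in gammas:
        T = [[sum(T[i][k] * G[k][j] for k in range(M)) for j in range(M)]
             for i in range(M)]
    # Power iteration for the dominant LEFT eigenvector (alpha_0 is a row
    # vector satisfying alpha_0 = alpha_0 T / P(Y)), normalized to sum to one.
    v = [1.0 / M] * M
    for _ in range(iters):
        w = [sum(v[i] * T[i][j] for i in range(M)) for j in range(M)]
        s = sum(w)
        v = [x / s for x in w]
    return v

gammas = [[[0.6, 0.2], [0.1, 0.7]]] * 4   # toy Gamma matrices (assumed values)
a0 = circular_init_distribution(gammas)
print([round(x, 4) for x in a0])          # [0.3333, 0.6667]
```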
SUBSTITUTE S~°ic.ET (BULB 26) . FIG. 4 is a simplified block diagram illustrating a circular MAP
decoder 110 for decoding an error-cowecting tail-biting trellis code in accordance with the eigenvector method described hereinahc>ve. Dec:oder 1 lt) comprises a Tr calculator 112 which calculates Tt as a function of the channel output yi. The Tt calculator receives as inputs the following ti-om a memory I30: the channel transition probability R(Yr, X), the probability pt(m.l r~~') that the encoder makes a transition from state rn.' to m. at time t., and the probability yr(Xirn;rn) that the encoder's output symbol is X given that the prwious encoder state is m' and the present encoder state is m. The T calculator calculates each element of Tr by summing over all possible encoder outputs X
in accordance with equation (2).
The calculated values of TI are provided to a man-ix product calculator 114 to form the matrix product Tj T2 ... TL using an identity matrix I16, e.g., received from memory, a switch 118 and a delay circuit 12(). At time t = 1, the identity matrix is applied as one input to the matrix prraduct calculator.
c-1 For each subsequent time from t = 2 to t = L, the matrix product ~ T'; gets i= t fed back via the delay circuit to the matt~ix product calculator. Then, ;.tt time t =
L, the resulting matrix product is provided via a switch I21 to a normalized eigenvector computer 122 which calculates the normalized eigenvecaor 2 0 corresponding to the largest eigenvalue of the matrix product input thereto.
With α0 thus initialized, i.e., as this normalized eigenvector, the succeeding αt vectors are determined recursively according to equation (5) in a matrix product calculator 124 using a delay 126 and switch 128 circuitry, as shown. Appropriate values of Γt are retrieved from a memory 130, and the resulting αt are then stored in memory 130.
The values of βt are determined in a matrix product calculator 132 using a switch 134 and delay 136 circuitry according to equation (6).
Then, the probabilities λt are calculated from the values of αt and βt in an element-by-element product calculator 140 according to equation (7). The values of λt are provided to a decoded bit value probability calculator 150 which determines the probability that the jth decoded bit at time t, d_t^j, equals zero. This probability is provided to a threshold decision device 152 which implements the following decision rule: if the probability from calculator 150 is greater than 1/2, then decide that the decoded bit is zero; if the probability is less than 1/2, then decide that the decoded bit is one; if it equals 1/2, then the decoded bit is randomly assigned the value 0 or 1. The output of the threshold decision device is the decoder's output bit at time t.
The probability that the decoded bit equals zero, P(d_t^j = 0 | Y_1^L), is also shown in FIG. 4 as being provided to a soft output function block 154 for providing a function of the probability, i.e., f(P(d_t^j = 0 | Y_1^L)), such as, for example, the

likelihood ratio = (1 - P(d_t^j = 0 | Y_1^L)) / P(d_t^j = 0 | Y_1^L)

as the decoder's soft-decision output. Another useful function of P(d_t^j = 0 | Y_1^L) is the

log likelihood ratio = log[(1 - P(d_t^j = 0 | Y_1^L)) / P(d_t^j = 0 | Y_1^L)].

Alternatively, a useful function for block 154 may simply be the identity function so that the soft output is just P(d_t^j = 0 | Y_1^L).
An alternative embodiment of the circular MAP decoder determines the state probability distributions by a recursion method. In particular, in one embodiment (the dynamic convergence method), the recursion continues until decoder convergence is detected. In this recursion (or dynamic convergence) method, steps (ii) and (iii) of the eigenvector method described hereinabove are replaced as follows:
RD-24,994
(ii.a) Starting with an initial α0 equal to (1/M, ..., 1/M), where M is the number of states in the trellis, calculate the forward recursion L times. Normalize the results so that the elements of each new αt sum to unity. Retain all L αt vectors.
(ii.b) Let α0 equal αL from the previous step and, starting at t = 1, calculate the first Lw_min αt probability vectors again. That is, calculate

αt(m) = Σ_i αt-1(i) γt(i, m)

for m = 0, 1, ..., M-1 and t = 1, 2, ..., Lw_min, where Lw_min is a suitable minimum number of trellis stages. Normalize as before. Retain only the most recent set of L α's found by the recursion in steps (ii.a) and (ii.b) and the αLw_min found previously in step (ii.a).
(ii.c) Compare αLw_min from step (ii.b) to the previously found set from step (ii.a). If the M corresponding elements of the new and old αLw_min are within a tolerance range, proceed to step (iv) set forth hereinabove. Otherwise, continue to step (ii.d).
(ii.d) Let t = t + 1 and calculate αt = αt-1 Γt. Normalize as before. Retain only the most recent set of L α's calculated and the αLw_min found previously in step (ii.a).
(ii.e) Compare the new αt's to the previously found set. If the M new and old αt's are within a tolerance range, proceed to step (iv). Otherwise, continue with step (ii.d) if the two most recent vectors do not agree to within the tolerance range and if the number of recursions does not exceed a specified maximum (typically 2L); proceeding to step (iv) otherwise.
This method then continues with steps (iv) and (v) given hereinabove with respect to the eigenvector method to produce the soft-decision outputs and decoded output bits of the circular MAP decoder.
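For illustration, the forward recursion with dynamic convergence detection may be sketched in Python as follows. The sketch simplifies steps (ii.b) through (ii.e) into a single wrapped pass with a per-stage agreement test; the interface, the tolerance handling, and the 2L recursion cap are illustrative assumptions:

```python
def matvec_left(alpha, mat):
    """Row vector times matrix: alpha_t = alpha_{t-1} * Gamma_t."""
    M = len(mat)
    return [sum(alpha[i] * mat[i][j] for i in range(M)) for j in range(M)]

def forward_recursion_dynamic(gammas, tol=1e-6):
    """Sketch of the dynamic-convergence forward recursion.

    gammas: list of L M x M Gamma matrices with positive entries.
    Returns the L most recently computed, normalized alpha_t vectors."""
    L = len(gammas)
    M = len(gammas[0])
    alphas = [None] * L
    a = [1.0 / M] * M                          # step (ii.a): alpha_0 = (1/M, ..., 1/M)
    for t in range(L):                         # first full forward pass
        a = matvec_left(a, gammas[t])
        s = sum(a)
        a = [x / s for x in a]                 # normalize so elements sum to unity
        alphas[t] = a
    for u in range(2 * L):                     # wrapped pass, capped at 2L recursions
        t = u % L
        a = matvec_left(a, gammas[t])
        s = sum(a)
        a = [x / s for x in a]
        converged = all(abs(x - y) < tol for x, y in zip(a, alphas[t]))
        alphas[t] = a                          # retain only the most recent alpha_t
        if converged:
            break                              # decoder convergence detected
    return alphas
```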
In another alternative embodiment of the circular MAP decoder as described in CA Patent Application No. 2,221,137, the recursion method described hereinabove is modified so that the decoder only needs to process a predetermined, fixed number of trellis stages for a second time, that is, a predetermined wrap depth. This is advantageous for implementation purposes because the number of computations required for decoding is the same for every encoded message block. Consequently, hardware and software complexities are reduced.
One way to estimate the required wrap depth for MAP decoding of a tail-biting convolutional code is to determine it from hardware or software experimentation, requiring that a circular MAP decoder with a variable wrap depth be implemented and experiments be conducted to measure the decoded bit error rate versus Eb/No for successively increasing wrap depths. The minimum decoder wrap depth that provides the minimum probability of decoded bit error for a specified Eb/No is found when further increases in wrap depth do not decrease the error probability.
If a decoded bit error rate that is greater than the minimum achievable at a specified Eb/No is tolerable, it is possible to reduce the required number of trellis stages processed by the circular MAP decoder. In particular, the wrap depth search described hereinabove may simply be terminated when the desired average probability of bit error is obtained.
Another way to determine the wrap depth for a given code is by using the code's distance properties. To this end, it is necessary to define two distinct decoder decision depths. As used herein, the term "correct path" refers to the sequence of states or a path through the trellis that results from encoding a block of data bits. The term "incorrect subset of a node" refers to the set of all incorrect (trellis) branches out of a correct path node and their descendents. Both the decision depths defined below depend on the convolutional encoder.
The decision depths are defined as follows:
(i) Define the forward decision depth for e-error correction, LF(e), to be the first depth in the trellis at which all paths in the incorrect subset of a correct path initial node, whether or not later merging to the correct path, lie more than a Hamming distance 2e from the correct path. The significance of LF(e) is that if there are e or fewer errors forward of the initial node, and encoding is known to have begun there, then the decoder must decode correctly. A formal tabulation of forward decision depths for convolutional codes was provided by J.B. Anderson and K. Balachandran in "Decision Depths of Convolutional Codes," IEEE Transactions on Information Theory, vol. IT-35, pp. 455-59, March 1989. A number of properties of LF(e) are disclosed in this reference and also by J.B. Anderson and S. Mohan in Source and Channel Coding - An Algorithmic Approach, Kluwer Academic Publishers, Norwell, MA, 1991. Chief among these properties is that a simple linear relation exists between LF and e; for example, with rate 1/2 codes, LF is approximately 9.08e.
(ii) Next define the unmerged decision depth for e-error correction, LU(e), to be the first depth in the trellis at which all paths in the trellis that never touch the correct path lie more than a Hamming distance of 2e away from the correct path.
The significance of LU(e) for soft-decision circular MAP decoding is that the probability of identifying a state on the actual transmitted path is high after the decoder processes LU(e) trellis stages. Therefore, the minimum wrap depth for circular MAP decoding is LU(e). Calculations of the depth LU(e) show that it is always larger than LF(e) but that it obeys the same approximate law. This implies that the minimum wrap depth can be estimated as the forward decision depth LF(e) if the unmerged decision depth of a code is not known.
By finding the minimum unmerged decision depth for a given encoder, we find the fewest number of trellis stages that must be processed by a practical circular decoder that generates soft-decision outputs. An algorithm to find LF(e), the forward decision depth, was given by J.B. Anderson and K. Balachandran in "Decision Depths of Convolutional Codes," cited hereinabove. To find LU(e):
(i) Extend the code trellis from left to right, starting from all trellis nodes simultaneously, except for the zero-state.
(ii) At each level, delete any paths that merge to the correct (all-zero) path; do not extend any paths out of the correct (zero) state node.
(iii) At level k, find the least Hamming distance, or weight, among paths terminating at nodes at this level.
(iv) If this least distance exceeds 2e, stop. Then, LU(e) = k.
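The four steps above amount to a breadth-first search over the trellis, which may be sketched in Python as follows. The table-driven encoder interface (next_state, out_weight) and the rate-1/2, memory-2 code with generators (7, 5) octal used to exercise it are hypothetical illustrations, not taken from the patent:

```python
def unmerged_decision_depth(e, next_state, out_weight, n_states):
    """Sketch of the LU(e) search for a binary-input feedforward encoder.

    next_state[s][b]: successor state for input bit b from state s.
    out_weight[s][b]: Hamming weight of the output symbol on that branch.
    Assumes a non-catastrophic code (no zero-weight cycles off state 0),
    so the least path weight grows and the search terminates."""
    INF = float('inf')
    # Step (i): start from every trellis node except the zero state.
    dist = [INF] * n_states
    for s in range(1, n_states):
        dist[s] = 0
    k = 0
    while True:
        k += 1
        new = [INF] * n_states
        for s in range(n_states):
            if dist[s] == INF or s == 0:
                continue                      # never extend out of the zero-state node
            for b in (0, 1):
                ns = next_state[s][b]
                if ns == 0:
                    continue                  # step (ii): drop paths merging to the all-zero path
                w = dist[s] + out_weight[s][b]
                if w < new[ns]:
                    new[ns] = w
        dist = new
        least = min(d for d in dist if d < INF)   # step (iii): least weight at level k
        if least > 2 * e:                         # step (iv)
            return k
```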
As described in CA Patent Application No. 2,221,137, experimentation via computer simulation led to two unexpected results: (1) wrapped processing of βt improves decoder performance; and (2) the use of a wrap depth of LU(e) + LF(e) ≈ 2 LF(e) improves performance significantly.
Hence, a preferred embodiment of the circular MAP decoder algorithm based on recursion comprises the following steps:
(i) Calculate Γt for t = 1, 2, ..., L according to equation (2).
(ii) Starting with an initial α0 equal to (1/M, ..., 1/M), where M is the number of states in the trellis, calculate the forward recursion of equation (5) (L + Lw) times for u = 1, 2, ..., (L + Lw), where Lw is the decoder's wrap depth. The trellis-level index t takes on the values ((u-1) mod L) + 1. When the decoder wraps around the received sequence of symbols from the channel, αL is treated as α0. Normalize the results so that the elements of each new αt sum to unity. Retain the L most recent α vectors found via this recursion.
(iii) Starting with an initial βL equal to (1, ..., 1)^T, calculate the backward recursion of equation (6) (L + Lw) times for u = 1, 2, ..., (L + Lw). The trellis-level index t takes on the values L - (u mod L). When the decoder wraps around the received sequence, β1 is used as βL+1 and Γ1 is used as ΓL+1 when calculating the new βt. Normalize the results so that the elements of each new βt sum to unity. Again, retain the L most recent β vectors found via this recursion.
The next step of this preferred recursion method is the same as step (v) set forth hereinabove with respect to the eigenvector method to produce the soft-decisions and decoded bits output by the circular MAP decoder.
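For illustration, steps (ii) and (iii) of this preferred fixed-wrap-depth method may be sketched in Python as follows, using 0-based indexing in which gammas[t] stands for Γt+1; the interface is an illustrative assumption:

```python
def circular_map_wrapped(gammas, Lw):
    """Sketch of the wrapped forward and backward recursions.

    gammas: list of L M x M matrices (gammas[t] holding Gamma_{t+1});
    Lw: the decoder's wrap depth.  Returns the L retained (most recent)
    normalized alpha and beta vectors."""
    L, M = len(gammas), len(gammas[0])
    alphas, betas = [None] * L, [None] * L
    a = [1.0 / M] * M                          # alpha_0 = (1/M, ..., 1/M)
    for u in range(1, L + Lw + 1):             # forward recursion (L + Lw) times
        t = (u - 1) % L                        # wrapped trellis level (0-based)
        a = [sum(a[i] * gammas[t][i][j] for i in range(M)) for j in range(M)]
        s = sum(a)
        a = [x / s for x in a]                 # normalize; wrapping treats alpha_L as alpha_0
        alphas[t] = a                          # keep the L most recent alpha vectors
    b = [1.0] * M                              # beta_L = (1, ..., 1)^T
    for u in range(1, L + Lw + 1):             # backward recursion (L + Lw) times
        t = (L - u) % L                        # wrapped trellis level, counting down
        b = [sum(gammas[t][i][j] * b[j] for j in range(M)) for i in range(M)]
        s = sum(b)
        b = [x / s for x in b]                 # normalize; on wrap, beta_0 serves as beta_L
        betas[t] = b                           # keep the L most recent beta vectors
    return alphas, betas
```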
FIG. 5 is a simplified block diagram illustrating a circular MAP decoder 180 in accordance with this preferred embodiment of the present invention. Decoder 180 comprises a Γt calculator 182 which calculates Γt as a function of the channel output yt. The channel outputs y1, ..., yL are provided to the Γt calculator via a switch 184. With the switch in the down position, L channel output symbols are loaded into Γt calculator 182 and a shift register 186 one at a time. Then, switch 184 is moved to the up position to allow the shift register to shift the first Lw received symbols into the Γt calculator again, i.e., to provide circular processing. The Γt calculator receives as inputs from memory 130 the channel transition probability R(Yt, X), the probability pt(m|m') that the encoder makes a transition from state m' to m at time t, and the probability qt(X|m', m) that the encoder's output symbol is X given that the previous encoder state is m' and the present encoder state is m. The Γt calculator calculates each element of Γt by summing over all possible encoder outputs X in accordance with equation (2).
The calculated values of Γt are provided to a matrix product calculator 190 which multiplies the Γt matrix by the αt-1 matrix, which is provided recursively via a delay 192 and demultiplexer 194 circuit. Control signal CNTRL1 causes demultiplexer 194 to select α0 from memory 196 as one input to matrix product calculator 190 when t = 1. When 2 ≤ t ≤ L, control signal CNTRL1 causes demultiplexer 194 to select αt-1 from delay 192 as one input to matrix product calculator 190. Values of Γt and αt are stored in memory 196 as required.
The βt vectors are calculated recursively in a matrix product calculator 200 via a delay 202 and demultiplexer 204 circuit. Control signal CNTRL2 causes demultiplexer 204 to select βL from memory 196 as one input to matrix product calculator 200 when t = L-1. When L-2 ≥ t ≥ 1, control signal CNTRL2 causes demultiplexer 204 to select βt+1 from delay 202 as one input to matrix product calculator 200. The resulting values of βt are multiplied by the values of αt, obtained from memory 196, in an element-by-element product calculator 206 to provide the probabilities λt, as described hereinabove. In the same manner as described hereinabove with reference to FIG. 4, the values of λt are provided to a decoded bit value probability calculator 150, the output of which is provided to a threshold decision device 152, resulting in the decoder's decoded output bits.
The conditional probability that the decoded bit equals zero, P(d_t^j = 0 | Y_1^L), is also shown in FIG. 5 as being provided to a soft output function block 154 for providing a function of the probability, i.e., f(P(d_t^j = 0 | Y_1^L)), such as, for example, the

likelihood ratio = (1 - P(d_t^j = 0 | Y_1^L)) / P(d_t^j = 0 | Y_1^L)

as the decoder's soft-decision output. Another useful function of P(d_t^j = 0 | Y_1^L) is the

log likelihood ratio = log[(1 - P(d_t^j = 0 | Y_1^L)) / P(d_t^j = 0 | Y_1^L)].

Alternatively, a useful function for block 154 may simply be the identity function so that the soft output is just P(d_t^j = 0 | Y_1^L).
In accordance with the present invention, it is possible to increase the rate of the parallel concatenated coding scheme comprising tail-biting nonrecursive systematic codes by deleting selected bits in the composite codeword formed by the composite codeword formatter according to an advantageously chosen pattern prior to transmitting the bits of the composite codeword over a channel. This technique is known as puncturing. This puncturing pattern is also known by the decoder. The following simple additional step performed by the received composite-codeword-to-component-codeword converter provides the desired decoder operation: the received composite-codeword-to-component-codeword converter merely inserts a neutral value for each known punctured bit during the formation of the received component codewords. For example, the neutral value is 0 for the case of antipodal signalling over an additive white Gaussian noise channel. The rest of the decoder's operation is as described hereinabove.
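For illustration, the converter's extra depuncturing step may be sketched in Python as follows; the names received and puncture_mask and the list-based interface are assumptions for illustration only:

```python
def depuncture(received, puncture_mask, neutral=0.0):
    """Sketch: rebuild a received component codeword, inserting a neutral
    value for each known punctured bit.

    received: soft channel values for the bits that were actually sent.
    puncture_mask: True at each position of the full codeword that was
    punctured (deleted) before transmission.  For antipodal signalling
    over an AWGN channel the neutral value is 0.0."""
    out = []
    it = iter(received)
    for punctured in puncture_mask:
        # Punctured positions get the neutral value; others consume the
        # next received soft value in order.
        out.append(neutral if punctured else next(it))
    return out
```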
It has heretofore been widely accepted that nonrecursive systematic convolutional codes would not be useful as the component codes in a parallel concatenated coding scheme because of the superior distance properties of RSC codes for relatively large data block lengths, as reported, for example, in S. Benedetto and G. Montorsi, "Design of Parallel Concatenated Convolutional Codes," IEEE Transactions on Communications, to be published. However, as described hereinabove, the inventors have determined that the minimum distance of a NSC code is less sensitive to data block length and, therefore, can be used advantageously in communication systems that transmit short blocks of data bits in very noisy channels. In addition, the inventors have determined that the use of tail-biting codes solves the problem of termination of input data sequences in turbo codes. Use of tail-biting convolutional codes as component codes in a parallel concatenated coding scheme has not been proposed heretofore. Hence, the present invention provides a parallel concatenated nonrecursive tail-biting systematic convolutional coding scheme with a decoder comprising circular MAP decoders for decoding the component tail-biting convolutional codes to provide better performance for short data block lengths than in conventional turbo coding schemes, as measured in terms of bit error rate versus signal-to-noise ratio.
While the preferred embodiments of the present invention have been shown and described herein, it will be obvious that such embodiments are provided by way of example only. Numerous variations, changes and substitutions will occur to those of skill in the art without departing from the invention herein. Accordingly, it is intended that the invention be limited only by the spirit and scope of the appended claims.
PARALLEL CONCATENATED TAIL-BITING CONVOLUTIONAL CODE AND DECODER THEREFOR
Field of the Invention
The present invention relates generally to error-correction coding for the communication of short messages over poor channels and, more particularly, to a parallel concatenated convolutional coding technique and a decoder therefor.
Background of the Invention
One form of parallel concatenated coding, referred to as either parallel concatenated convolutional coding (PCCC) or "turbo coding," has been the subject of recent coding research due to impressive demonstrated coding gains when applied to blocks of 10,000 or more bits. (See C. Berrou, A. Glavieux and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proceedings of the IEEE International Conference on Communications, 1993, pp. 1064-1070; J.D. Andersen, "The TURBO Coding Scheme," Report IT-146, ISSN 0105-854, Institute of Telecommunication, Technical University of Denmark, December 1994; and P. Robertson, "Illuminating the Structure of Code and Decoder of Parallel Concatenated Recursive Systematic (Turbo) Codes," 1994 IEEE Globecom Conference, pp. 1298-1303.) However, it has been shown that the performance of a turbo code degrades substantially as the length of the encoded data block decreases.
This effect is due to the strong dependence of its component recursive systematic convolutional codes' weight structures on block length. A second issue is the proper termination of message blocks applied to a turbo encoder. As described by O. Joersson and H. Meyr in "Terminating the Trellis of Turbo-Codes," IEE Electronics Letters, vol. 30, no. 16, August 4, 1994, pp. 1285-1286, the interleaving used in turbo encoders may make it impossible to terminate both the interleaved and non-interleaved encoder input sequences with a single set of tail bits. While it is possible to use a second tail sequence embedded into the message structure such that the encoder operating on the interleaved data sequence is properly terminated, this doubles the overhead associated with encoder termination and reduces the effective code rate. An alternative is not to terminate one of the encoder sequences, but this degrades performance of the encoder/decoder system, particularly when applied to short messages. In "Terminating the Trellis of Turbo-Codes in the Same State," IEE Electronics Letters, 1995, vol. 31, no. 1, January 5, pp. 22-23, A.S. Barbulescu and S.S. Pietrobon reported a method that places restrictions on the interleaver design in order to terminate two component recursive systematic convolutional (RSC) encoders with a single termination bit sequence. Their performance results show some degradation compared to performance obtained by terminating both encoders when an optimized interleaver is used. In addition, published bit-error rate (BER) versus energy-per-bit-to-noise-power-spectral-density ratio (Eb/No) data exhibit a flattening in BER over a range of Eb/No values when RSC's are utilized in the turbo encoder.
Accordingly, it is desirable to provide an improved parallel concatenated coding technique for short data blocks.
Summary of the Invention
In accordance with the present invention, a parallel concatenated convolutional coding scheme utilizes tail-biting nonrecursive systematic convolutional (NSC) codes. The associated decoder iteratively utilizes circular maximum a posteriori (MAP) decoding to produce hard and soft decision outputs. Use of tail-biting codes solves the problem of termination of the input data sequences in turbo coding, thereby avoiding associated decoder performance degradation for short messages. While NSC codes are generally weaker than recursive systematic convolutional (RSC) codes having the same memory asymptotically as the data block length increases, the free distance of a NSC code is less sensitive to data block length. Hence, parallel concatenated coding with NSC codes will perform better than with RSC codes having the same memory for messages that are shorter than some threshold data block size.

Brief Description of the Drawings
The features and advantages of the present invention will become apparent from the following detailed description of the invention when read with the accompanying drawings in which:
FIG. 1 is a simplified block diagram illustrating a parallel concatenated encoder;
FIG. 2 is a simplified block diagram illustrating a decoder for parallel concatenated codes;
FIG. 3 is a simplified block diagram illustrating a tail-biting, nonrecursive systematic convolutional encoder for use in the coding scheme according to the present invention;
FIG. 4 is a simplified block diagram illustrating a circular MAP decoder useful as a component decoder in a decoder for a parallel concatenated convolutional coding scheme according to the present invention; and
FIG. 5 is a simplified block diagram illustrating an alternative embodiment of a circular MAP decoder useful as a component decoder for a parallel concatenated convolutional coding scheme according to the present invention.

Detailed Description of the Invention
FIG. 1 is a general block diagram of encoder signal processing 10 for parallel concatenated coding schemes. It comprises a plurality N of component encoders 12 which operate on blocks of data bits from a source.
The data blocks are permuted by interleaving algorithms via interleavers 14. As shown, there are N-1 interleavers for N encoders 12. Finally, the component encoder outputs are combined into a single composite codeword by a composite codeword formatter 16. The composite codeword formatter is chosen to suit the characteristics of the channel, and it may be followed by a frame formatter chosen to suit the channel and communication system's channel access technique. The frame formatter may also insert other necessary overhead such as control bits and synchronization symbols.
A significant code rate advantage can be obtained in parallel concatenated coding if the component codes are systematic codes. The codewords (output) generated by a systematic encoder include the original data bits provided as inputs to the encoder and additional parity bits. (The redundancy introduced by the parity bits is what yields a code's error correction capability.) Therefore, when systematic encoders are used in the parallel concatenated encoder shown in FIG. 1, the codewords produced by all component encoders 12 contain the input data bits. If formatter 16 forms a data packet or composite codeword comprising only the parity bits generated by each component encoder 12 and the block of information bits to be coded, a substantial improvement in the rate of the composite parallel concatenated code is realized by eliminating repetition of the information bits in the transmitted composite codeword. For example, if component encoder 1 and component encoder 2 of a parallel concatenated convolutional code (PCCC) encoder comprising two component codes are both rate 1/2 codes, the composite parallel concatenated code rate is increased from 1/4 for nonsystematic component codes to 1/3 when systematic component codes are used.
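The rate arithmetic in this example can be checked mechanically. The helper below assumes N rate-1/2 component encoders operating on a block of L information bits and the parity-only composite format described above; names and interface are illustrative:

```python
from fractions import Fraction

def composite_rates(n_encoders):
    """Sketch: composite code rates for N rate-1/2 component encoders.

    Nonsystematic formatting sends all 2L output bits of every encoder,
    so the rate is L / (N * 2L).  Systematic formatting sends the L
    information bits once plus L parity bits per encoder, so the rate
    is L / (L + N * L)."""
    nonsystematic = Fraction(1, 2 * n_encoders)
    systematic = Fraction(1, n_encoders + 1)
    return nonsystematic, systematic
```

For the two-encoder PCCC example above, composite_rates(2) yields 1/4 for nonsystematic and 1/3 for systematic component codes, matching the text.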
Parallel concatenated coding schemes which utilize recursive systematic convolutional (RSC) codes have been the recent topic of much research. These parallel concatenated convolutional codes (PCCCs) are also commonly known in the literature as "turbo" codes. As noted hereinabove, it has been demonstrated that these PCCCs can achieve impressive performance in terms of bit error rate (BER) versus the energy per bit to noise power spectral density ratio (Eb/No) for the case of relatively large messages, i.e., ten thousand or more bits. However, it has also been shown that the coding gain obtained with turbo codes degrades significantly with decreasing data block size because the recursive systematic convolutional component codes' strengths are quite sensitive to data block length. On the other hand, the performance of a nonrecursive systematic tail-biting convolutional code is independent of data block length for most practical purposes; the obtainable performance degrades only if the block of data bits encoded is less than a minimum size that is determined by the NSC's decision depth properties.
FIG. 2 illustrates a general decoder 20 for parallel concatenated codes in block diagram form. Decoder 20 comprises the following: a composite-codeword-to-component-codeword converter 22 which converts the composite codeword received from a channel to individual received codewords for each component decoder 24; N component decoders 24 corresponding to the N component encoders of FIG. 1; the same type (or same) interleavers 14 that are used in the parallel concatenated encoder (FIG. 1); and first and second deinterleavers 28 and 29, respectively, that each have a sequence reordering characteristic that is equivalent to a series concatenation of N-1 deinterleavers 30 corresponding to the N-1 interleavers used for encoding. The required ordering of these deinterleavers is shown in FIG. 2 and is the reverse of the ordering of the interleavers. The outputs of component decoders 24 are some type of soft-decision information on the estimated value of each data bit in the received codeword. For example, the outputs of the component decoders may be a first function of the probabilities that the decoded bits are 0 or 1 conditioned on the received sequence of symbols from the channel. One example of such a first function removes the influence of the conditional probability P(d_t^j = 0 | Y_t^j) from a component decoder's soft-decision output that is inputted to a next sequential component decoder after an appropriate permutation, where P(d_t^j = 0 | Y_t^j) is the probability that the jth information bit at time t is 0 conditioned on the jth (systematic) bit of the received channel output symbol Yt.
Alternatively, the soft-decision information outputted by component decoders 24 may be a function of the likelihood ratio

P(d_t^j = 1 | Y_1^L) / P(d_t^j = 0 | Y_1^L) = (1 - P(d_t^j = 0 | Y_1^L)) / P(d_t^j = 0 | Y_1^L),

or a function of the log-likelihood ratio log[P(d_t^j = 1 | Y_1^L) / P(d_t^j = 0 | Y_1^L)].
As shown, the Nth component decoder has a second output, i.e., a second function of the conditional probabilities for the decoded bit values or likelihood ratios above. An example of this second function is the product of P(d_t^j = 0 | Y_1^L) and the a priori probability that d_t^j = 0 received from the previous component decoder.
The decoder for parallel concatenated codes operates iteratively in the following way. The first component decoder (decoder 1) calculates a set of soft-decision values for the sequence of information bits encoded by the first component encoder based on the received codeword and any a priori information about the transmitted information bits. In the first iteration, if there is no a priori information on the source statistics, it is assumed that the bits are equally likely to be 0 or 1 (i.e., P(bit = 0) = P(bit = 1) = 1/2). The soft-decision values calculated by decoder 1 are then interleaved using the same type (or same) interleaver that was used in the encoder to permute the block of data bits for the second encoder. These permuted soft-decision values and the corresponding received codeword comprise the inputs to the next component decoder (decoder 2). The permuted soft-decision values received from the preceding component decoder and interleaver are utilized by the next component decoder as a priori information about the data bits to be decoded. The component decoders operate sequentially in this manner until the Nth decoder has calculated a set of soft-decision outputs for the block of data bits that was encoded by the encoder. The next step is deinterleaving soft-decision values from the Nth decoder as described hereinabove. The first decoder then operates on the received codeword again using the new soft-decision values from the Nth decoder as its a priori information. Decoder operation proceeds in this manner for the desired number of iterations. At the conclusion of the final iteration, the sequence of values which are a second function of the soft-decision outputs calculated by the Nth decoder is deinterleaved in order to return the data to the order in which it was received by the PCCC encoder. The number of iterations can be a predetermined number, or it can be determined dynamically by detecting decoder convergence.
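For illustration, the iterative schedule just described may be sketched in Python as follows, for the case where the deinterleaver inverts the single interleaver. Every interface here (the decoder, interleaver, and deinterleaver callables) is an assumption for illustration, not a literal description of the embodiment:

```python
def iterative_decode(received_codewords, decoders, interleavers, deinterleave,
                     n_iterations, block_len):
    """Sketch of the iterative parallel concatenated decoding schedule.

    decoders[i](codeword, a_priori) returns per-bit soft-decision values;
    interleavers[i] and deinterleave permute lists of soft values."""
    soft = [0.5] * block_len                 # no source statistics: P(bit=0) = P(bit=1) = 1/2
    for _ in range(n_iterations):
        soft = decoders[0](received_codewords[0], soft)
        for i in range(1, len(decoders)):
            permuted = interleavers[i - 1](soft)   # permute for the next encoder's bit order
            soft = decoders[i](received_codewords[i], permuted)
        soft = deinterleave(soft)            # reorder the Nth decoder's outputs
    return soft                              # final soft decisions in original bit order
```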
The decoder provides soft-decision information which is a function of the probability P(d_t^j = 0 | Y_1^L); that is, the conditional probability that the jth data bit in a k-bit symbol input to the encoder at time t is 0, given that the set of channel outputs Y_1^L = {y1, ..., yL} is received. In addition, the decoder may provide hard-decision information as a function of its soft-decision output through a decision device which implements a decision rule such as the following: if P(d_t^j = 0 | Y_1^L) > 1/2, then decide that the decoded bit is 0; if P(d_t^j = 0 | Y_1^L) < 1/2, then decide that the decoded bit is 1; otherwise, randomly assign the decoded bit the value 0 or 1.
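For illustration, this threshold decision rule for a single decoded bit may be sketched in Python as follows; the function name and the tie-breaking random generator are illustrative:

```python
import random

def hard_decision(p0, rng=None):
    """Sketch of the threshold decision rule, where p0 stands for the
    soft-decision probability P(d_t^j = 0 | Y_1^L)."""
    if p0 > 0.5:
        return 0                 # more likely zero: decide 0
    if p0 < 0.5:
        return 1                 # more likely one: decide 1
    if rng is None:
        rng = random.Random()
    return rng.randint(0, 1)     # exactly 1/2: randomly assign 0 or 1
```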
Typical turbo decoders utilize either maximum a posteriori (MAP) decoders, such as described by L.R. Bahl, J. Cocke, F. Jelinek and J. Raviv in "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, March 1974, pp. 284-287, or soft output Viterbi algorithm (SOVA) decoders, as described by J. Hagenauer and P. Hoeher in "A Viterbi Algorithm with Soft-Decision Outputs and its Applications," 1989 IEEE Globecom Conference, pp. 1680-1686. A MAP decoder produces the probability that a decoded bit value is 0 or 1. On the other hand, a SOVA decoder typically calculates the likelihood ratio

P(decoded bit is 1) / P(decoded bit is 0)

for each decoded bit. It is apparent that this likelihood ratio can be obtained from P(decoded bit is 0) and vice versa using P(decoded bit is 0) = 1 - P(decoded bit is 1). Some computational advantage has been found when either MAP or SOVA decoders work with the logarithm of likelihood ratios, that is,

log[P(decoded bit is 1) / P(decoded bit is 0)].
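For illustration, the conversion between the bit probability and the log-likelihood ratio noted above may be sketched in Python as follows; the function names are illustrative:

```python
import math

def llr_from_p0(p0):
    """Log-likelihood ratio log(P(bit=1)/P(bit=0)) from P(bit=0),
    using P(bit=1) = 1 - P(bit=0) as noted above."""
    return math.log((1.0 - p0) / p0)

def p0_from_llr(llr):
    """Inverse mapping: recover P(bit=0) from the log-likelihood ratio."""
    return 1.0 / (1.0 + math.exp(llr))
```

The two mappings are inverses of one another, so a decoder may move freely between the probability and log-likelihood-ratio domains.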
WO 97/40582 PCT/US97/06129
It has been demonstrated that the coding gain (error correction capability) obtained with turbo codes degrades significantly with decreasing data block size. Several authors have attributed this behavior primarily to the properties of RSC codes. It has been shown that the distance property of a RSC code increases with increasing data block length. Conversely, the minimum distance of a RSC code decreases with decreasing data block length. A second problem is the difficulty in terminating all RSC codes comprising a turbo coding scheme due to interleaving. Disadvantageously, the adverse effects resulting from a lack of sequence termination or the imposition of restrictions on interleaver design are significant and become even more so with decreasing data block length.
In accordance with the present invention, the component codes in a parallel concatenated convolutional coding scheme comprise tail-biting nonrecursive systematic convolutional codes. The use of such tail-biting codes solves the problem of termination of the input data sequences in turbo coding, thereby avoiding decoder performance degradation for short messages.
Although NSC codes are generally weaker than RSC codes having the same memory, the free distance of a NSC code is less sensitive to data block length.
Hence, parallel concatenated coding with NSC codes will perform better than with RSC codes having the same memory for messages that are shorter than a predetermined threshold data block size. The performance cross-over point is a function of desired decoded bit error rate, code rate, and code memory.
FIG. 3 illustrates an example of a rate = 1/2, memory = m tail-biting nonrecursive systematic convolutional encoder for use in the parallel concatenated convolutional coding (PCCC) scheme of the present invention.
For purposes of description, an (n, k, m) encoder denotes an encoder wherein the input symbols comprise k bits, the output symbols comprise n bits, and m = encoder memory in k-bit symbols. For purposes of illustration, FIG. 3 is drawn for binary input symbols, i.e., k = 1. However, the present invention is applicable to any values of k, n, and m.
Initially, a switch 50 is in the down position, and L input bits are shifted into a shift register 52, k at a time (one input symbol at a time for this example). After loading the Lth bit into the encoder, the switch moves to the up position and encoding begins with the shift of the first bit from a second shift register 54 into the nonrecursive systematic encoder; the state of the encoder at this time is (bL, bL-1, ..., bL-(km-1)). In this example, the encoder output comprises the current input bit and a parity bit formed in block 56 (shown as modulo-2 addition for this example) as a function of the encoder state and the current input symbol. Encoding ends when the Lth bit is encoded.
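The tail-biting encoding operation just described can be sketched in software as follows. This is an illustrative model only: the tap set, block length, and function name are our assumptions for the example, not limitations of the invention. The shift register is preloaded with the last m input bits, so the encoder starts and ends in the same state without termination bits.

```python
def tb_nsc_encode(bits, taps=(1, 1, 1)):
    """Tail-biting rate-1/2 nonrecursive systematic convolutional encoder.

    `taps` are the parity generator coefficients (g0, g1, ..., gm) applied
    to (current bit, previous bit, ..., bit m steps back); the encoder
    memory is m = len(taps) - 1.  Returns (systematic bit, parity bit)
    pairs for each of the L input bits.
    """
    m = len(taps) - 1
    L = len(bits)
    assert L > m, "block must be longer than the encoder memory"
    # Preload the state with the last m bits of the block (tail biting).
    state = [bits[L - 1 - i] for i in range(m)]
    out = []
    for b in bits:
        window = [b] + state              # (current bit, m previous bits)
        parity = 0
        for g, x in zip(taps, window):
            parity ^= g & x               # modulo-2 tap sum (block 56)
        out.append((b, parity))
        state = [b] + state[:-1]          # shift the current bit in
    return out

codeword = tb_nsc_encode([1, 0, 1, 1, 0])
```

Because the state is preloaded with the final m bits, the encoder's ending state equals its starting state, which is the tail-biting property the circular MAP decoder exploits.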
Another aspect of the present invention, the associated decoder for the hereinabove described parallel concatenated encoder, comprises a circular MAP decoder as described by the present inventors in commonly assigned, copending CA Patent Application No. 2,221,137. In particular, CA
Patent Application No. 2,221,137 describes a circular MAP decoder useful for decoding tail-biting convolutional codes. The circular MAP decoder can deliver both an estimate of the encoded data block and reliability information to a data sink, e.g., a speech synthesis signal processor for use in transmission error concealment or protocol processor for packet data as a measure of block error probability for use in repeat request decisions.
In particular, as described in CA Patent Application No. 2,221,137, a circular MAP decoder for error-correcting trellis codes that employ tail biting produces soft-decision outputs. The circular MAP decoder provides an estimate of the probabilities of the states in the first stage of the trellis, which probabilities replace the a priori knowledge of the starting state in a conventional MAP decoder. The circular MAP decoder provides the initial state probability distribution in either of two ways. The first involves a solution to an eigenvalue problem for which the resulting eigenvector is the desired initial state probability distribution; with knowledge of the starting state, the circular MAP
decoder performs the rest of the decoding according to the conventional MAP
decoding algorithm. The second is based on a recursion for which the iterations converge to a starting state distribution. After sufficient iterations, a state on the circular sequence of states is known with high probability, and the circular MAP decoder performs the rest of the decoding according to the conventional MAP decoding algorithm.
The objective of the conventional MAP decoding algorithm is to find the conditional probabilities:
P(state m at time t | receive channel outputs y1, ..., yL).
The term L in this expression represents the length of the data block in units of the number of encoder symbols. (The encoder for an (n, k) code operates on k-bit input symbols to generate n-bit output symbols.) The term yt is the channel output (symbol) at time t.
The MAP decoding algorithm actually first finds the probabilities:
λt(m) = P(St = m; Y1L) ;   (1)

that is, the joint probability that the encoder state at time t, St, is m and the set of channel outputs Y1L = {y1, ..., yL} is received. These are the desired probabilities multiplied by a constant (P(Y1L), the probability of receiving the set of channel outputs {y1, ..., yL}).
Now define the elements of a matrix Γt by

Γt(i,j) = P(state j at time t; yt | state i at time t-1).

The matrix Γt is calculated as a function of the channel transition probability R(Yt, X), the probability pt(m|m') that the encoder makes a transition from state m' to m at time t, and the probability qt(X|m',m) that the encoder's output symbol is X given that the previous encoder state is m' and the present encoder state is m. In particular, each element of Γt is calculated by summing over all possible encoder outputs X as follows:

Γt(m',m) = Σ_X pt(m|m') qt(X|m',m) R(Yt, X).   (2)
The MAP decoder calculates L of these matrices, one for each trellis stage.
They are formed from the received channel output symbols and the nature of the trellis branches for a given code.
Next define the M joint probability elements of a row vector αt by

αt(j) = P(state j at time t; y1, ..., yt)   (3)

and the M conditional probability elements of a column vector βt by

βt(j) = P(yt+1, ..., yL | state j at time t)   (4)

for j = 0, 1, ..., (M-1), where M is the number of encoder states. (Note that matrices and vectors are denoted herein by using boldface type.) The steps of the MAP decoding algorithm are as follows:
(i) Calculate α1, ..., αL by the forward recursion:

αt = αt-1 Γt ,  t = 1, ..., L.   (5)

(ii) Calculate β1, ..., βL-1 by the backward recursion:

βt = Γt+1 βt+1 ,  t = L-1, ..., 1.   (6)

(iii) Calculate the elements of λt by:
λt(i) = αt(i) βt(i) ,  all i, t = 1, ..., L.   (7)

(iv) Find related quantities as needed. For example, let A_t^j be the set of states St = {S_t^1, S_t^2, ..., S_t^km} such that the jth element of St, S_t^j, is equal to zero. For a conventional non-recursive trellis code, S_t^j = d_t^j, the jth data bit at time t.
Therefore, the decoder's soft-decision output is
P(d_t^j = 0 | Y1L) = (1 / P(Y1L)) Σ_{St ∈ A_t^j} λt(m),

where P(Y1L) = Σ_m λL(m) and m is the index that corresponds to a state St.
The decoder's hard-decision or decoded bit output is obtained by applying P(d_t^j = 0 | Y1L) to the following decision rule: if P(d_t^j = 0 | Y1L) > 1/2, then the decoded bit d_t^j = 0; if P(d_t^j = 0 | Y1L) < 1/2, then d_t^j = 1; otherwise, randomly assign d_t^j the value 0 or 1.
As another example of a related quantity for step (iv) hereinabove, the matrix of probabilities σt comprises elements defined as follows:

σt(i,j) = P{St-1 = i; St = j; Y1L} = αt-1(i) Γt(i,j) βt(j).

These probabilities are useful when it is desired to determine the a posteriori probability of the encoder output bits.
In the standard application of the MAP decoding algorithm, the forward recursion is initialized by the vector α0 = (1, 0, ..., 0), and the backward recursion is initialized by βL = (1, 0, ..., 0)^T. These initial conditions are based on assumptions that the encoder's initial state S0 = 0 and its ending state SL = 0.
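The recursions of equations (5)-(7) can be sketched in software as follows. This is an illustrative NumPy sketch assuming the Γt matrices have already been computed per equation (2); it is not the patent's hardware embodiment, and the function and argument names are ours.

```python
import numpy as np

def map_decode(gamma, alpha0, betaL):
    """Forward-backward recursions of equations (5)-(7).

    gamma : array of shape (L, M, M); gamma[t-1][i, j] = Gamma_t(i, j)
            as defined in equation (2).
    alpha0: length-M vector of initial state probabilities (row vector).
    betaL : length-M vector of final state probabilities (column vector).
    Returns lambda_t(i) = alpha_t(i) * beta_t(i), equation (7).
    """
    L, M, _ = gamma.shape
    alpha = np.zeros((L, M))
    beta = np.zeros((L, M))
    # (i) forward recursion, equation (5): alpha_t = alpha_{t-1} Gamma_t
    a = alpha0
    for t in range(L):
        a = a @ gamma[t]
        alpha[t] = a
    # (ii) backward recursion, equation (6): beta_t = Gamma_{t+1} beta_{t+1}
    b = betaL
    beta[L - 1] = b
    for t in range(L - 2, -1, -1):
        b = gamma[t + 1] @ b
        beta[t] = b
    # (iii) equation (7): element-by-element product
    return alpha * beta
```

A useful consistency check is that Σ_m λt(m) equals the same constant P(Y1L) at every trellis stage t, since λt sums to α0 Γ1 ... ΓL βL regardless of t.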
One embodiment of the circular MAP decoder determines the initial state probability distribution by solving an eigenvalue problem as follows. Let αt, βt, Γt and λt be as before, but take the initial α0 and βL as follows:

Set βL to the column vector (1 1 1 ... 1)^T.

Let α0 be an unknown (vector) variable.
Then, (i) Calculate Γt for t = 1, 2, ..., L according to equation (2).
(ii) Find the largest eigenvalue of the matrix product Γ1 Γ2 ... ΓL. Normalize the corresponding eigenvector so that its components sum to unity. This vector is the solution for α0. The eigenvalue is P(Y1L).
(iii) Form the succeeding αt by the forward recursion set forth in equation (5).
(iv) Starting from βL, initialized as above, form the βt by the backward recursion set forth in equation (6).
(v) Form the λt as in (7), as well as other desired variables, such as, for example, the soft-decision output P(d_t^j = 0 | Y1L) or the matrix of probabilities σt, described hereinabove.

The inventors have shown that the unknown variable α0 satisfies the matrix equation

α0 = α0 Γ1 Γ2 ... ΓL / P(Y1L).

From the fact that this formula expresses a relationship among probabilities, we know that the product of Γt matrices on the right has largest eigenvalue equal to P(Y1L), and that the corresponding eigenvector must be a probability vector.
With the initial βL = (1 1 1 ... 1)^T, equation (6) gives βL-1. Thus, repeated applications of this backward recursion give all the βt. Once α0 is known and βL is set, all computations in the circular MAP decoder of the present invention follow the conventional MAP decoding algorithm.
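Under this eigenvalue formulation, α0 can be computed numerically as sketched below. This is an illustrative NumPy sketch; in the patent's embodiment the computation is performed by a dedicated normalized eigenvector computer, and the function name is ours. Since α0 is a left eigenvector of the Γ product, we take an eigenvector of the transposed product.

```python
import numpy as np

def circular_map_init(gamma):
    """Initial state distribution for the circular MAP decoder:
    alpha_0 is the normalized left eigenvector of Gamma_1 ... Gamma_L
    for its largest eigenvalue, which equals P(Y_1^L)."""
    prod = np.eye(gamma.shape[1])
    for g in gamma:                       # form Gamma_1 Gamma_2 ... Gamma_L
        prod = prod @ g
    # Left eigenvector of prod == eigenvector of prod.T
    w, v = np.linalg.eig(prod.T)
    k = np.argmax(w.real)                 # largest (Perron) eigenvalue
    alpha0 = np.abs(v[:, k].real)         # positive up to sign; fix the sign
    return alpha0 / alpha0.sum(), w[k].real
```

For a product of matrices with positive entries, the Perron-Frobenius theorem guarantees that the largest eigenvalue is real and positive with a positive eigenvector, so the normalization yields a valid probability vector.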
FIG. 4 is a simplified block diagram illustrating a circular MAP decoder 110 for decoding an error-correcting tail-biting trellis code in accordance with the eigenvector method described hereinabove. Decoder 110 comprises a Γt calculator 112 which calculates Γt as a function of the channel output yt. The Γt calculator receives as inputs the following from a memory 130: the channel transition probability R(Yt, X), the probability pt(m|m') that the encoder makes a transition from state m' to m at time t, and the probability qt(X|m',m) that the encoder's output symbol is X given that the previous encoder state is m' and the present encoder state is m. The Γt calculator calculates each element of Γt by summing over all possible encoder outputs X in accordance with equation (2).
The calculated values of Γt are provided to a matrix product calculator 114 to form the matrix product Γ1 Γ2 ... ΓL using an identity matrix 116, e.g., received from memory, a switch 118 and a delay circuit 120. At time t = 1, the identity matrix is applied as one input to the matrix product calculator. For each subsequent time from t = 2 to t = L, the matrix product Γ1 Γ2 ... Γt-1 gets fed back via the delay circuit to the matrix product calculator. Then, at time t = L, the resulting matrix product is provided via a switch 121 to a normalized eigenvector computer 122 which calculates the normalized eigenvector corresponding to the largest eigenvalue of the matrix product input thereto.
With α0 thus initialized, i.e., as this normalized eigenvector, the succeeding αt vectors are determined recursively according to equation (5) in a matrix product calculator 124 using a delay 126 and switch 128 circuitry, as shown.
Appropriate values of Γt are retrieved from a memory 130, and the resulting αt are then stored in memory 130.
The values of βt are determined in a matrix product calculator 132 using a switch 134 and delay 136 circuitry according to equation (6).
Then, the probabilities λt are calculated from the values of αt and βt in an element-by-element product calculator 140 according to equation (7). The values of λt are provided to a decoded bit value probability calculator 150 which determines the probability that the jth decoded bit at time t, d_t^j, equals zero. This probability is provided to a threshold decision device 152 which implements the following decision rule: if the probability from calculator 150 is greater than 1/2, then decide that the decoded bit is zero; if the probability is less than 1/2, then decide that the decoded bit is one; if it equals 1/2, then the decoded bit is randomly assigned the value 0 or 1. The output of the threshold decision device is the decoder's output bit at time t.
The probability that the decoded bit equals zero, P(d_t^j = 0 | Y1L), is also shown in FIG. 4 as being provided to a soft output function block 154 for providing a function of the probability, i.e., f(P(d_t^j = 0 | Y1L)), such as, for example, the

likelihood ratio = (1 - P(d_t^j = 0 | Y1L)) / P(d_t^j = 0 | Y1L)

as the decoder's soft-decision output. Another useful function of P(d_t^j = 0 | Y1L) is the

log likelihood ratio = ln [ (1 - P(d_t^j = 0 | Y1L)) / P(d_t^j = 0 | Y1L) ].

Alternatively, a useful function for block 154 may simply be the identity function so that the soft output is just P(d_t^j = 0 | Y1L).
An alternative embodiment of the circular MAP decoder determines the state probability distributions by a recursion method. In particular, in one embodiment (the dynamic convergence method), the recursion continues until decoder convergence is detected. In this recursion (or dynamic convergence) method, steps (ii) and (iii) of the eigenvector method described hereinabove are replaced as follows:
(ii.a) Starting with an initial α0 equal to (1/M, ..., 1/M), where M is the number of states in the trellis, calculate the forward recursion L times. Normalize the results so that the elements of each new αt sum to unity. Retain all L αt vectors.
(ii.b) Let α0 equal αL from the previous step and, starting at t = 1, calculate the first Lwmin αt probability vectors again. That is, calculate

αt(m) = Σ_i αt-1(i) Γt(i,m)

for m = 0, 1, ..., M-1 and t = 1, 2, ..., Lwmin, where Lwmin is a suitable minimum number of trellis stages. Normalize as before. Retain only the most recent set of L αt vectors found by the recursion in steps (ii.a) and (ii.b) and the αLwmin found previously in step (ii.a).
(ii.c) Compare αLwmin from step (ii.b) to the previously found set from step (ii.a). If the M corresponding elements of the new and old αLwmin are within a tolerance range, proceed to step (iv) set forth hereinabove. Otherwise, continue to step (ii.d).
(ii.d) Let t = t + 1 and calculate αt = αt-1 Γt. Normalize as before. Retain only the most recent set of L αt vectors calculated and the αLwmin found previously in step (ii.a).
(ii.e) Compare the new αt to the previously found set. If the M new and old αt are within a tolerance range, proceed to step (iv). Otherwise, continue with step (ii.d) if the two most recent vectors do not agree to within the tolerance range and if the number of recursions does not exceed a specified maximum (typically 2L); proceed to step (iv) otherwise.
This method then continues with steps (iv) and (v) given hereinabove with respect to the eigenvector method to produce the soft-decision outputs and decoded output bits of the circular MAP decoder.
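The dynamic convergence recursion can be sketched as follows. This is a simplified software sketch of steps (ii.a)-(ii.e): the tolerance, recursion cap, and per-stage convergence test are our assumptions, and the comparison is folded into a single loop rather than split into the lettered sub-steps.

```python
import numpy as np

def circular_map_alpha_dynamic(gamma, tol=1e-6, max_wraps=2):
    """Dynamic-convergence estimate of the alpha vectors: iterate the
    normalized forward recursion around the circle of L trellis stages
    until successive passes agree to within `tol`, capped at a specified
    maximum number of recursions (typically 2L stages)."""
    L, M, _ = gamma.shape
    a = np.full(M, 1.0 / M)               # (ii.a) uniform starting vector
    alphas = np.zeros((L, M))
    for t in range(L):                    # first full pass; retain all L alphas
        a = a @ gamma[t]
        a /= a.sum()                      # normalize each new alpha_t
        alphas[t] = a
    for u in range(max_wraps * L):        # wrap around: alpha_L serves as alpha_0
        t = u % L
        old = alphas[t].copy()
        a = a @ gamma[t]
        a /= a.sum()
        alphas[t] = a
        if np.abs(a - old).max() < tol:   # new and old alpha_t agree: converged
            break
    return alphas
```

With well-behaved (positive) Γt matrices the recursion forgets its uniform starting vector geometrically fast, which is why a state on the circular sequence becomes known with high probability after sufficient iterations.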
In another alternative embodiment of the circular MAP decoder as described in CA Patent Application No. 2,221,137, the recursion method described hereinabove is modified so that the decoder only needs to process a predetermined, fixed number of trellis stages for a second time, that is, a predetermined wrap depth. This is advantageous for implementation purposes because the number of computations required for decoding is the same for every encoded message block. Consequently, hardware and software complexities are reduced.
One way to estimate the required wrap depth for MAP decoding of a tail-biting convolutional code is to determine it from hardware or software experimentation, requiring that a circular MAP decoder with a variable wrap depth be implemented and experiments be conducted to measure the decoded bit error rate versus Eb/No for successively increasing wrap depths. The minimum decoder wrap depth that provides the minimum probability of decoded bit error for a specified Eb/No is found when further increases in wrap depth do not decrease the error probability.
If a decoded bit error rate that is greater than the minimum achievable at a specified Eb/No is tolerable, it is possible to reduce the required number of trellis stages processed by the circular MAP decoder. In particular, the wrap depth search described hereinabove may simply be terminated when the desired average probability of bit error is obtained.

Another way to determine the wrap depth for a given code is by using the code's distance properties. To this end, it is necessary to define two distinct decoder decision depths. As used herein, the term "correct path" refers to the sequence of states or a path through the trellis that results from encoding a block of data bits. The term "incorrect subset of a node" refers to the set of all incorrect (trellis) branches out of a correct path node and their descendents.
Both the decision depths defined below depend on the convolutional encoder.
The decision depths are defined as follows:
(i) Define the forward decision depth for e-error correction, LF(e), to be the first depth in the trellis at which all paths in the incorrect subset of a correct path initial node, whether later merging to the correct path or not, lie more than a Hamming distance 2e from the correct path. The significance of LF(e) is that if there are e or fewer errors forward of the initial node, and encoding is known to have begun there, then the decoder must decode correctly. A formal tabulation of forward decision depths for convolutional codes was provided by J.B. Anderson and K. Balachandran in "Decision Depths of Convolutional Codes," IEEE Transactions on Information Theory, vol. IT-35, pp. 455-59, March 1989. A number of properties of LF(e) are disclosed in this reference and also by J.B. Anderson and S. Mohan in Source and Channel Coding - An Algorithmic Approach, Kluwer Academic Publishers, Norwell, MA, 1991. Chief among these properties is that a simple linear relation exists between LF and e; for example, with rate 1/2 codes, LF is approximately 9.08e.
(ii) Next define the unmerged decision depth for e-error correction, LU(e), to be the first depth in the trellis at which all paths in the trellis that never touch the correct path lie more than a Hamming distance of 2e away from the correct path.
The significance of LU(e) for soft-decision circular MAP decoding is that the probability of identifying a state on the actual transmitted path is high after the decoder processes LU(e) trellis stages. Therefore, the minimum wrap depth for circular MAP decoding is LU(e). Calculations of the depth LU(e) show that it is always larger than LF(e) but that it obeys the same approximate law. This implies that the minimum wrap depth can be estimated as the forward decision depth LF(e) if the unmerged decision depth of a code is not known.
By finding the minimum unmerged decision depth for a given encoder, we find the fewest number of trellis stages that must be processed by a practical circular decoder that generates soft-decision outputs. An algorithm to find LF(e), the forward decision depth, was given by J.B. Anderson and K. Balachandran in "Decision Depths of Convolutional Codes," cited hereinabove.
To find LU(e):
(i) Extend the code trellis from left to right, starting from all trellis nodes simultaneously, except for the zero-state.

(ii) At each level, delete any paths that merge to the correct (all-zero) path; do not extend any paths out of the correct (zero) state node.
(iii) At level k, find the least Hamming distance, or weight, among paths terminating at nodes at this level.
(iv) If this least distance exceeds 2e, stop. Then, LU(e) = k.
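The four steps above can be sketched in software for a small illustrative code. The choice of a rate-1/2, memory-2 systematic code with parity generator 111 is our assumption for the example; the procedure itself only requires tracking, per trellis level, the least path weight over all paths that never touch the all-zero state.

```python
def unmerged_decision_depth(e, g=0b111, m=2):
    """Unmerged decision depth LU(e) via the four-step trellis extension,
    for a rate-1/2 systematic code whose parity generator is the bit mask
    `g` over the (m+1)-bit window (current bit, then m state bits, MSB =
    most recent).  Assumes a code whose unmerged path weights grow without
    bound, so the search terminates."""
    INF = float("inf")
    # (i) start from every trellis node except the zero state, weight 0
    weight = [INF] + [0] * ((1 << m) - 1)
    k = 0
    while True:
        k += 1
        new = [INF] * (1 << m)
        for s in range(1, 1 << m):        # (ii) never extend the zero state
            if weight[s] == INF:
                continue
            for bit in (0, 1):
                window = (bit << m) | s
                ns = window >> 1          # shift register advances
                if ns == 0:
                    continue              # (ii) path merged with correct path
                parity = bin(window & g).count("1") & 1
                w = weight[s] + bit + parity  # systematic bit + parity bit
                new[ns] = min(new[ns], w)
        weight = new
        # (iii)-(iv) least weight at level k; stop once it exceeds 2e
        if min(weight) > 2 * e:
            return k
```

Tracing this toy code by hand gives LU(0) = 2 and LU(1) = 5; larger, practical codes are handled identically, just with more states.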
As described in CA Patent Application No. 2,221,137, experimentation via computer simulation led to two unexpected results: (1) wrapped processing of βt improves decoder performance; and (2) the use of a wrap depth of LU(e) + LF(e) ≈ 2 LF(e) improves performance significantly.
Hence, a preferred embodiment of the circular MAP decoder algorithm based on recursion comprises the following steps:
(i) Calculate Γt for t = 1, 2, ..., L according to equation (2).
(ii) Starting with an initial α0 equal to (1/M, ..., 1/M), where M is the number of states in the trellis, calculate the forward recursion of equation (5) (L + Lw) times for u = 1, 2, ..., (L + Lw), where Lw is the decoder's wrap depth. The trellis-level index t takes on the values ((u-1) mod L) + 1. When the decoder wraps around the received sequence of symbols from the channel, αL is treated as α0. Normalize the results so that the elements of each new αt sum to unity. Retain the L most recent α vectors found via this recursion.
(iii) Starting with an initial βL equal to (1, ..., 1)^T, calculate the backward recursion of equation (6) (L + Lw) times for u = 1, 2, ..., (L + Lw). The trellis-level index t takes on the values L - (u mod L). When the decoder wraps around the received sequence, β1 is used as βL+1 and Γ1 is used as ΓL+1 when calculating the new βL. Normalize the results so that the elements of each new βt sum to unity. Again, retain the L most recent β vectors found via this recursion.
The next step of this preferred recursion method is the same as step (v) set forth hereinabove with respect to the eigenvector method to produce the soft-decisions and decoded bits output by the circular MAP decoder.
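Steps (ii) and (iii) of this fixed-wrap-depth recursion can be sketched as follows. This is an illustrative NumPy sketch whose index bookkeeping follows the steps above; the variable names are ours, and the choice of wrap depth is left to the caller (e.g., Lw ≈ 2 LF(e) as noted hereinabove).

```python
import numpy as np

def circular_map_wrap(gamma, wrap_depth):
    """Fixed-wrap-depth circular MAP recursion: run the normalized
    forward and backward recursions (L + Lw) stages, wrapping circularly
    around the block, and retain the L most recent vectors of each."""
    L, M, _ = gamma.shape
    Lw = wrap_depth
    # Step (ii): forward recursion from a uniform alpha_0.
    alpha = np.zeros((L, M))
    a = np.full(M, 1.0 / M)
    for u in range(1, L + Lw + 1):
        t = (u - 1) % L                   # trellis level ((u-1) mod L) + 1
        a = a @ gamma[t]                  # wrapping: alpha_L is treated as alpha_0
        a /= a.sum()                      # normalize each new alpha_t
        alpha[t] = a
    # Step (iii): backward recursion from beta_L = (1, ..., 1)^T.
    beta = np.zeros((L, M))
    b = np.ones(M)
    for u in range(1, L + Lw + 1):
        t = L - (u % L)                   # trellis level L - (u mod L)
        b = gamma[t % L] @ b              # Gamma_1 stands in for Gamma_{L+1}
        b /= b.sum()
        beta[t - 1] = b                   # wrapping: beta_1 is used as beta_{L+1}
    return alpha * beta                   # lambda_t, as in equation (7)
```

The per-stage normalization changes each λt only by a scale factor, so the soft decisions of step (v), which are ratios, are unaffected.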
FIG. 5 is a simplified block diagram illustrating a circular MAP decoder 180 in accordance with this preferred embodiment of the present invention. Decoder 180 comprises a Γt calculator 182 which calculates Γt as a function of the channel output yt. The channel outputs y1, ..., yL are provided to the Γt calculator via a switch 184. With the switch in the down position, L channel output symbols are loaded into the Γt calculator 182 and a shift register 186 one at a time. Then, switch 184 is moved to the up position to allow the shift register to shift the first Lw received symbols into the Γt calculator again, i.e., to provide circular processing. The Γt calculator receives as inputs from memory 130 the channel transition probability R(Yt, X), the probability pt(m|m') that the encoder makes a transition from state m' to m at time t, and the probability qt(X|m',m) that the encoder's output symbol is X given that the previous encoder state is m' and the present encoder state is m. The Γt calculator calculates each element of Γt by summing over all possible encoder outputs X in accordance with equation (2).
The calculated values of Γt are provided to a matrix product calculator 190 which multiplies the Γt matrix by the αt-1 matrix, which is provided recursively via a delay 192 and demultiplexer 194 circuit. Control signal CNTRL1 causes demultiplexer 194 to select α0 from memory 196 as one input to matrix product calculator 190 when t = 1. When 2 ≤ t ≤ L, control signal CNTRL1 causes demultiplexer 194 to select αt-1 from delay 192 as one input to matrix product calculator 190. Values of Γt and αt are stored in memory 196 as required.
The βt vectors are calculated recursively in a matrix product calculator 200 via a delay 202 and demultiplexer 204 circuit. Control signal CNTRL2 causes demultiplexer 204 to select βL from memory 196 as one input to matrix product calculator 200 when t = L-1. When L-2 ≥ t ≥ 1, control signal CNTRL2 causes demultiplexer 204 to select βt+1 from delay 202 as one input to matrix product calculator 200. The resulting values of βt are multiplied by the values of αt, obtained from memory 196, in an element-by-element product calculator 206 to provide the probabilities λt, as described hereinabove. In the same manner as described hereinabove with reference to FIG. 4, the values of λt are provided to a decoded bit value probability calculator 150, the output of which is provided to a threshold decision device 152, resulting in the decoder's decoded output bits.
The conditional probability that the decoded bit equals zero, P(d_t^j = 0 | Y1L), is also shown in FIG. 5 as being provided to a soft output function block 154 for providing a function of the probability, i.e., f(P(d_t^j = 0 | Y1L)), such as, for example, the

likelihood ratio = (1 - P(d_t^j = 0 | Y1L)) / P(d_t^j = 0 | Y1L)

as the decoder's soft-decision output. Another useful function of P(d_t^j = 0 | Y1L) is the

log likelihood ratio = ln [ (1 - P(d_t^j = 0 | Y1L)) / P(d_t^j = 0 | Y1L) ].

Alternatively, a useful function for block 154 may simply be the identity function so that the soft output is just P(d_t^j = 0 | Y1L).
In accordance with the present invention, it is possible to increase the rate of the parallel concatenated coding scheme comprising tail-biting nonrecursive systematic codes by deleting selected bits in the composite codeword formed by the composite codeword formatter according to an advantageously chosen pattern prior to transmitting the bits of the composite codeword over a channel. This technique is known as puncturing. This puncturing pattern is also known by the decoder. The following simple additional step performed by the received composite-codeword-to-component-codeword converter provides the desired decoder operation: the received composite-codeword-to-component-codeword converter merely inserts a neutral value for each known punctured bit during the formation of the received component codewords. For example, the neutral value is 0 for the case of antipodal signalling over an additive white Gaussian noise channel. The rest of the decoder's operation is as described hereinabove.
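The puncturing and neutral-value insertion steps can be sketched as follows. The helper names and the example pattern are hypothetical; the only fixed requirements stated above are that encoder and decoder share the pattern and that each punctured position receives a neutral value (0 for antipodal signalling over an AWGN channel).

```python
def puncture(composite, pattern):
    """Delete the composite-codeword bits at positions where `pattern`
    has a 0, prior to transmission over the channel."""
    return [b for b, keep in zip(composite, pattern) if keep]

def depuncture(received, pattern, neutral=0.0):
    """Re-insert a neutral value at each known punctured position during
    formation of the received component codewords."""
    it = iter(received)
    return [next(it) if keep else neutral for keep in pattern]
```

The composite decoder then operates on the depunctured sequence exactly as it would on an unpunctured one, since a neutral channel value carries no evidence for either bit hypothesis.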
It has heretofore been widely accepted that nonrecursive systematic convolutional codes would not be useful as the component codes in a parallel concatenated coding scheme because of the superior distance properties of RSC codes for relatively large data block lengths, as reported, for example, in S. Benedetto and G. Montorsi, "Design of Parallel Concatenated Convolutional Codes," IEEE Transactions on Communications, to be published. However, as described hereinabove, the inventors have determined that the minimum distance of a NSC code is less sensitive to data block length and, therefore, can be used advantageously in communication systems that transmit short blocks of data bits in very noisy channels. In addition, the inventors have determined that the use of tail-biting codes solves the problem of termination of input data sequences in turbo codes. Use of tail-biting convolutional codes as component codes in a parallel concatenated coding scheme has not been proposed heretofore. Hence, the present invention provides a parallel concatenated nonrecursive tail-biting systematic convolutional coding scheme with a decoder comprising circular MAP decoders for decoding the component tail-biting convolutional codes to provide better performance for short data block lengths than in conventional turbo coding schemes, as measured in terms of bit error rate versus signal-to-noise ratio.
While the preferred embodiments of the present invention have been shown and described herein, it will be obvious that such embodiments are provided by way of example only. Numerous variations, changes and substitutions will occur to those of skill in the art without departing from the invention herein. Accordingly, it is intended that the invention be limited only by the spirit and scope of the appended claims.
Claims (36)
1. A method for parallel concatenated convolutional encoding, comprising the steps of:
providing a block of data bits to a parallel concatenated encoder comprising a plurality of N component encoders and N-1 interleavers connected in a parallel concatenation;
encoding the block of data bits in a first one of the component encoders by applying a tail-biting nonrecursive systematic convolutional code thereto and thereby generating a corresponding first component codeword comprising the data bits and parity bits;
interleaving the block of data bits to provide a permuted block of data bits;
encoding the resulting permuted block of data bits in a subsequent component encoder by applying the tail-biting nonrecursive systematic convolutional code thereto, and thereby generating a corresponding second component codeword comprising the data bits and parity bits;
repeating the steps of interleaving and encoding the resulting permuted block of data bits through the remaining N-2 interleavers and the remaining N-2 component encoders, and thereby generating component codewords comprising the data bits and parity bits; and formatting the bits of the component codewords to provide a composite codeword.
2. The method of claim 1 wherein the formatting step is performed such that the composite codeword includes only one occurrence of each bit in the block of data bits.
3. The method of claim 1 wherein the formatting step is performed such that the composite codeword includes only selected ones of the bits comprising the component codewords according to a predetermined pattern.
4. A method for decoding parallel concatenated convolutional codes, comprising the steps of:
receiving from a channel a composite codeword that comprises a formatted collection of bits from a plurality (N) of component codewords that have been generated by applying tail-biting nonrecursive systematic convolutional codes to a block of data bits in a parallel concatenated encoder, forming received component codewords from the received composite codeword, each respective received component codeword being received by a corresponding one of a plurality of N component decoders of a composite decoder, each respective component decoder also receiving a set of a priori soft-decision information for values of the data bits;
decoding the received component codewords by a process of iterations through the N component decoders and N-1 interleavers to provide soft-decision outputs from the composite decoder, the N component decoders each providing soft-decision information on each data bit in the data block in the order encoded by the corresponding component encoder, the N-1 interleavers each interleaving the soft-decision information from a preceding component decoder to provide a permuted block of soft information to a subsequent component decoder, the set of a priori soft-decision information for the first of the N component decoders being calculated assuming that the data bits' values are equally likely for the first iteration and thereafter comprising a first function of the soft-decision information, which first function of the soft-decision information is fed back from the Nth component decoder via a first deinterleaver comprising N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the first deinterleaver being applied in reverse order to the N-1 interleavers, the set of a priori soft-decision information provided to each other component decoder comprising the first function of the soft-decision information from the previous sequential component decoder; and deinterleaving in a second deinterleaver to provide a second function of the soft-decision output from the N th component decoder as the composite decoder's soft-decision output using N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the second deinterleaver being applied in reverse order to the N-1 interleavers.
5. The method of claim 4 wherein the number of iterations through the component decoders, interleavers and deinterleavers is a predetermined number.
6. The method of claim 4 wherein the iterations through the component decoders, interleavers and deinterleavers continue until decoder convergence is detected if the number of iterations is less than a maximum number; otherwise decoding terminates after the maximum number of iterations, and the composite decoder provides the second function of soft-decision outputs from the Nth component decoder as its soft-decision output via the second deinterleaver.
7. The method of claim 4, further comprising the step of implementing a decision rule to provide hard-decision outputs as a function of the composite decoder's soft-decision output.
8. The method of claim 4 wherein the formatted collection of bits has been punctured according to a predetermined pattern, the method for decoding further comprising the step of inserting neutral values for all punctured bits when forming the received component codewords.
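The "neutral values" of claim 8 can be illustrated with a small sketch. For a soft-input decoder working on log-likelihood ratios, a neutral value is an LLR of zero, meaning the channel conveys no information about the punctured bit; the puncture pattern and received values below are hypothetical.

```python
# Illustrative depuncturing per claim 8: re-insert a neutral soft value
# (LLR of 0.0, i.e. "no channel information") at every punctured position.
# The pattern and received values are made-up examples.

NEUTRAL = 0.0  # log-likelihood ratio carrying no information

def depuncture(received, pattern):
    """Expand received soft values to full codeword length; pattern[i] is
    True where bit i was transmitted, False where it was punctured."""
    it = iter(received)
    return [next(it) if kept else NEUTRAL for kept in pattern]

# Every other parity position punctured (a common rate-increasing pattern).
pattern = [True, False, True, True, False, True]
received = [1.2, -0.7, 0.3, 2.1]   # soft values for transmitted positions only

full = depuncture(received, pattern)
assert full == [1.2, NEUTRAL, -0.7, 0.3, NEUTRAL, 2.1]
```

The component decoders can then run unmodified on the full-length block, since the neutral positions simply contribute no channel evidence.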
9. The method of claim 4 wherein the decoding step is performed by the N component decoders comprising circular MAP decoders, the decoding step comprising solving an eigenvector problem.
10. The method of claim 4 wherein the decoding step is performed by the N component decoders comprising circular MAP decoders, the decoding step comprising a recursion method.
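Claims 9 and 10 name two ways to initialize a circular MAP decoder: solving an eigenvector problem, or a recursion method. On a toy two-state trellis (the gamma matrices below are invented for illustration), the connection can be sketched: the tail-biting starting vector is the normalized dominant left eigenvector of the product of the per-stage gamma matrices, and repeatedly sweeping the forward recursion around the circle converges to that same vector.

```python
# Sketch of the claim-10 recursion method on a toy 2-state trellis.
# The gamma matrices are illustrative, not derived from any real code.

def mat_vec(v, m):
    """Row vector times matrix, normalized to sum to 1."""
    out = [sum(v[i] * m[i][j] for i in range(len(v))) for j in range(len(m[0]))]
    s = sum(out)
    return [x / s for x in out]

gammas = [[[0.6, 0.2], [0.1, 0.7]],   # one gamma matrix per trellis stage
          [[0.5, 0.3], [0.2, 0.6]],
          [[0.7, 0.1], [0.3, 0.5]]]

# Recursion method: sweep the forward recursion around the tail-biting
# circle until the starting vector stabilizes (a power iteration on the
# product of the gamma matrices).
alpha = [0.5, 0.5]
for _ in range(50):
    for g in gammas:
        alpha = mat_vec(alpha, g)

# At convergence one more full circuit leaves alpha unchanged -- exactly
# the normalized dominant-eigenvector condition of claim 9.
check = alpha
for g in gammas:
    check = mat_vec(check, g)
assert all(abs(a - c) < 1e-9 for a, c in zip(alpha, check))
```

The recursion method thus trades the explicit eigenvector solve for a few extra circuits of the same forward recursion the MAP decoder already performs.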
11. A method for encoding and decoding parallel concatenated convolutional codes, comprising the steps of:
providing a block of data bits to a parallel concatenated encoder comprising a plurality of N component encoders and N-1 interleavers connected in a parallel concatenation;
encoding the block of data bits in a first one of the component encoders by applying a tail-biting nonrecursive systematic convolutional code thereto and thereby generating a corresponding first component codeword comprising the data bits and parity bits;
interleaving the block of data bits to provide a permuted block of data bits;
encoding the resulting permuted block of data bits in a subsequent component encoder by applying the tail-biting nonrecursive systematic convolutional code thereto and thereby generating a corresponding second component codeword comprising the data bits and parity bits;
repeating the steps of interleaving and encoding the resulting permuted block of data bits through the remaining N-2 interleavers and the remaining N-2 component encoders, and thereby generating component codewords comprising the data bits and parity bits;
formatting the bits of the component codewords to provide a composite codeword;
inputting the composite codeword to a channel;
receiving a received composite codeword from the channel;
forming received component codewords from the received composite codeword and providing each respective received component codeword to a corresponding one of a plurality of N component decoders of a composite decoder, each respective component decoder also receiving a set of a priori probabilities for values of the data bits;
decoding the received component codewords by a process of iterations through the N component decoders and N-1 interleavers to provide soft-decision outputs from the composite decoder, the N component decoders each providing soft-decision information on each data bit in the data block in the order encoded by the corresponding component encoder, the N-1 interleavers each interleaving the soft-decision information from a preceding component decoder to provide a permuted block of soft information to a subsequent component decoder, the set of a priori soft-decision information for the first of the N component decoders being calculated assuming that the data bits' values are equally likely for the first iteration and thereafter comprising a first function of the soft-decision information, which first function of the soft-decision information is fed back from the N th decoder via a first deinterleaver comprising N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the first deinterleaver being applied in reverse order to the N-1 interleavers, the set of a priori soft-decision information provided to each other component decoder comprising the first function of the soft-decision information from the previous sequential component decoder; and deinterleaving in a second deinterleaver to provide a second function of the soft-decision output from the N th component decoder as the composite decoder's soft-decision output using N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the second deinterleaver being applied in reverse order to the N-1 interleavers.
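The encoding steps of claim 11 rest on the tail-biting property of a nonrecursive (feedforward) convolutional code: initializing the shift register with the last M data bits guarantees the encoder ends in the state it started in, so no tail bits are needed. The sketch below is a hypothetical example using a rate-1/2 systematic code with parity generator 1 + D + D^2; the code and data block are illustrative only.

```python
# Hypothetical tail-biting encoding with a nonrecursive systematic
# convolutional code (parity generator 1 + D + D^2). Illustrative only.

M = 2  # encoder memory

def tailbite_encode(bits):
    """Initialize the shift register with the LAST M data bits, so the
    encoder ends in the same state it started in (circular trellis)."""
    state = list(bits[-M:])              # tail-biting initialization
    start_state = tuple(state)
    parity = []
    for b in bits:
        parity.append(b ^ state[-1] ^ state[-2])  # taps 1, D, D^2
        state = state[1:] + [b]          # shift the new input in
    assert tuple(state) == start_state   # ending state == starting state
    return bits + parity                 # systematic: data bits, then parity

codeword = tailbite_encode([1, 0, 1, 1, 0])
```

Because the trellis is circular, every data bit is protected by the same amount of code structure, which is the rationale for tail-biting over terminated component codes in the parallel concatenation.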
12. The method of claim 11 wherein the formatting step is performed such that the composite codeword includes only one occurrence of each bit in the block of data bits.
13. The method of claim 11 wherein the formatting step is performed such that the composite codeword includes only selected ones of the bits comprising the component codewords according to a predetermined pattern.
14. The method of claim 11 wherein the number of iterations through the component decoders, interleavers and deinterleavers is a predetermined number.
15. The method of claim 11 wherein the iterations through the component decoders, N-1 interleavers and deinterleavers continue until decoder convergence is detected if the number of iterations is less than a maximum number; otherwise decoding terminates after the maximum number of iterations, and the composite decoder provides the second function of soft-decision outputs from the Nth component decoder as its soft-decision output via the second deinterleaver.
16. The method of claim 11, further comprising the step of implementing a decision rule to provide hard-decision outputs as a function of the composite decoder's soft-decision output.
17. The method of claim 11 wherein the decoding step is performed by the N component decoders comprising circular MAP decoders, the decoding step comprising solving an eigenvector problem.
18. The method of claim 11 wherein the decoding step is performed by the N component decoders comprising circular MAP decoders, the decoding step comprising a recursion method.
19. The method of claim 11 wherein the formatting step further comprises puncturing selected ones of the bits from the component codewords which comprise the composite codeword according to a predetermined pattern, the method for decoding further comprising the step of inserting neutral values for all punctured bits when forming the received component codewords.
20. A parallel concatenated encoder, comprising:
a plurality (N) of component encoders and a plurality (N-1) of interleavers connected in a parallel concatenation for systematically applying tail-biting nonrecursive systematic convolutional codes to a block of data bits and various permutations of the block of data bits, and thereby generating component codewords comprising the data bits and parity bits; and a composite codeword formatter for formatting the collection of bits from the component codewords to provide a composite codeword.
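The composite codeword formatter of claim 20, combined with the single-occurrence requirement of claim 21, can be sketched as follows. The data-then-parity layout of the component codewords is an assumption for illustration; the point is that the systematic data bits appear only once in the composite codeword, followed by each component's parity bits.

```python
# Illustrative claim-21 formatter: one occurrence of each data bit,
# followed by the parity bits of every component codeword.
# The data-then-parity component layout is an assumption.

def format_composite(data, component_codewords):
    """Keep one copy of the systematic data bits, then append each
    component codeword's parity bits."""
    k = len(data)
    composite = list(data)               # single occurrence of the data bits
    for cw in component_codewords:
        composite.extend(cw[k:])         # parity bits only
    return composite

data = [1, 0, 1]
cw1 = data + [0, 1, 1]                   # parity from component encoder 1
cw2 = data + [1, 1, 0]                   # parity from component encoder 2
composite = format_composite(data, [cw1, cw2])
assert composite == [1, 0, 1, 0, 1, 1, 1, 1, 0]
```

Claim 22's variant would additionally drop (puncture) selected parity positions from the composite codeword according to a predetermined pattern.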
21. The encoder of claim 20 wherein the composite codeword formatter produces the composite codeword such that it includes only one occurrence of each bit in the block of data bits.
22. The encoder of claim 20 wherein the composite codeword formatter produces the composite codeword such that it includes only selected ones of the bits comprising the component codewords according to a predetermined pattern.
23. A composite decoder for decoding parallel concatenated convolutional codes, comprising:
a composite-codeword-to-component-codeword converter for receiving a composite codeword from a channel, the composite codeword comprising selected bits of a plurality of N component codewords that have been generated by applying tail-biting nonrecursive convolutional codes to a block of data bits in a parallel concatenated encoder, and forming a plurality of N corresponding received component codewords therefrom;
a plurality (N) of component decoders, each respective decoder receiving a corresponding received component codeword from the composite-codeword-to-component-codeword converter, each respective decoder also receiving a set of a priori soft-decision information for values of the data bits, the N component decoders each providing soft-decision information on each data bit in the data block in the order encoded by a corresponding component encoder in the parallel concatenated encoder;
a plurality of N-1 interleavers, each respective interleaver interleaving the soft-decision information from a corresponding component decoder to provide a permuted block of soft information to a subsequent component decoder, the received codewords being decoded by a process of iterations through the N component decoders and N-1 interleavers to provide soft-decision output from the composite decoder;
a first deinterleaver comprising N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the first deinterleaver being applied in reverse order to the N-1 interleavers, the set of a priori soft-decision information for the first of the N component decoders being calculated assuming that the data bits' values are equally likely for the first iteration and thereafter comprising a first function of the soft-decision information, which first function of the soft-decision information is outputted by the N th decoder and fed back via the first deinterleaver, the set of a priori soft-decision information provided to each other component decoder comprising the first function of the soft-decision information from the previous sequential component decoder; and a second deinterleaver comprising N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the second deinterleaver being applied in reverse order to the N-1 interleavers, the second deinterleaver deinterleaving a second function of the soft-decision output from the N th component decoder to provide the composite decoder's soft-decision output.
24. The decoder of claim 23 wherein the number of iterations through the component decoders, interleavers and deinterleavers is a predetermined number.
25. The decoder of claim 23 wherein the iterations through the component decoders, interleavers and deinterleavers continue until decoder convergence is detected if the number of iterations is less than a maximum number; otherwise decoding terminates after the maximum number of iterations, and the composite decoder provides the second function of soft-decision outputs from the Nth component decoder as its soft-decision output via the second deinterleaver.
26. The decoder of claim 23, further comprising a decision device for implementing a decision rule to provide hard-decision outputs as a function of the composite decoder's soft-decision output.
27. The decoder of claim 23 wherein the N component decoders comprise circular MAP decoders which decode by solving an eigenvector problem.
28. The decoder of claim 23 wherein the N component decoders comprise circular MAP decoders which decode by using a recursion method.
29. An encoder and decoder system for encoding and decoding parallel concatenated convolutional codes, comprising:
a parallel concatenated encoder comprising a plurality (N) of component encoders and a plurality (N-1) of encoder interleavers connected in a parallel concatenation for systematically applying tail-biting nonrecursive systematic convolutional codes to a block of data bits and various permutations of the block of data bits, and thereby generating component codewords comprising the data bits and parity bits; and a composite codeword formatter for formatting the collection of bits from the component codewords to provide a composite codeword;
a composite-codeword-to-component-codeword converter for receiving the composite codeword from a channel and forming a plurality of N corresponding received component codewords therefrom;
a plurality (N) of component decoders, each respective decoder receiving a corresponding received component codeword from the composite-codeword-to-component-codeword converter, each respective decoder also receiving a set of a priori soft-decision information for values of the data bits, the N component decoders each providing soft-decision information on each data bit in the data block in the order encoded by a corresponding component encoder in the parallel concatenated encoder;
a plurality of N-1 interleavers, each respective interleaver interleaving the soft-decision information from a corresponding component decoder to provide a permuted block of soft information to a subsequent component decoder, the received codewords being decoded by a process of iterations through the N component decoders and N-1 interleavers to provide soft-decision output from the composite decoder;
a first deinterleaver comprising N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the first deinterleaver being applied in reverse order to the N-1 interleavers, the set of a priori soft-decision information for the first of the N component decoders being calculated assuming that the data bits' values are equally likely for the first iteration and thereafter comprising a first function of the soft-decision information, which first function of the soft-decision information is outputted by the N th decoder and fed back via the first deinterleaver, the set of a priori soft-decision information provided to each other component decoder comprising the first function of the soft-decision information from the previous sequential component decoder; and a second deinterleaver comprising N-1 deinterleavers corresponding to the N-1 interleavers, the N-1 deinterleavers of the second deinterleaver being applied in reverse order to the N-1 interleavers, the second deinterleaver deinterleaving a second function of the soft-decision output from the N th component decoder to provide the composite decoder's soft-decision output.
30. The encoder and decoder system of claim 29 wherein the composite codeword formatter produces the composite codeword such that it includes only one occurrence of each bit in the block of data bits.
31. The encoder and decoder system of claim 29 wherein the composite codeword formatter produces the composite codeword such that it includes only selected ones of the bits comprising the component codewords according to a predetermined pattern.
32. The encoder and decoder system of claim 29 wherein the number of iterations through the component decoders, interleavers and deinterleavers is a predetermined number.
33. The encoder and decoder system of claim 29 wherein the iterations through the component decoders, interleavers and deinterleavers continue until decoder convergence is detected if the number of iterations is less than a maximum number; otherwise decoding terminates after the maximum number of iterations, and the composite decoder provides the second function of soft-decision outputs from the Nth component decoder as its soft-decision output via the second deinterleaver.
34. The encoder and decoder system of claim 29, further comprising a decision device for implementing a decision rule to provide hard-decision outputs as a function of the decoder's soft-decision output.
35. The encoder and decoder system of claim 29 wherein the N component decoders comprise circular MAP decoders which decode by solving an eigenvector problem.
36. The encoder and decoder system of claim 29 wherein the N component decoders comprise circular MAP decoders which decode by using a recursion method.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/636,732 US5721745A (en) | 1996-04-19 | 1996-04-19 | Parallel concatenated tail-biting convolutional code and decoder therefor |
US08/636,732 | 1996-04-19 | ||
PCT/US1997/006129 WO1997040582A1 (en) | 1996-04-19 | 1997-04-14 | Parallel concatenated tail-biting convolutional code and decoder therefor |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2221295A1 CA2221295A1 (en) | 1997-10-30 |
CA2221295C true CA2221295C (en) | 2005-03-22 |
Family
ID=24553103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002221295A Expired - Fee Related CA2221295C (en) | 1996-04-19 | 1997-04-14 | Parallel concatenated tail-biting convolutional code and decoder therefor |
Country Status (21)
Country | Link |
---|---|
US (1) | US5721745A (en) |
EP (1) | EP0834222B1 (en) |
JP (1) | JP3857320B2 (en) |
KR (1) | KR100522263B1 (en) |
CN (1) | CN1111962C (en) |
AR (1) | AR006767A1 (en) |
AU (1) | AU716645B2 (en) |
BR (1) | BR9702156A (en) |
CA (1) | CA2221295C (en) |
CZ (1) | CZ296885B6 (en) |
DE (1) | DE69736881T2 (en) |
HU (1) | HU220815B1 (en) |
ID (1) | ID16464A (en) |
IL (1) | IL122525A0 (en) |
MY (1) | MY113013A (en) |
NO (1) | NO975966L (en) |
PL (3) | PL183537B1 (en) |
RU (1) | RU2187196C2 (en) |
UA (1) | UA44779C2 (en) |
WO (1) | WO1997040582A1 (en) |
ZA (1) | ZA973217B (en) |
Families Citing this family (174)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FI100565B (en) * | 1996-01-12 | 1997-12-31 | Nokia Mobile Phones Ltd | Communication method and apparatus for encoding a signal |
US6023783A (en) * | 1996-05-15 | 2000-02-08 | California Institute Of Technology | Hybrid concatenated codes and iterative decoding |
KR100498752B1 (en) * | 1996-09-02 | 2005-11-08 | 소니 가부시끼 가이샤 | Apparatus and method for receiving data using bit metrics |
US5996113A (en) * | 1996-11-26 | 1999-11-30 | Intel Corporation | Method and apparatus for generating digital checksum signatures for alteration detection and version confirmation |
US6377610B1 (en) * | 1997-04-25 | 2002-04-23 | Deutsche Telekom Ag | Decoding method and decoding device for a CDMA transmission system for demodulating a received signal available in serial code concatenation |
US6490243B1 (en) * | 1997-06-19 | 2002-12-03 | Kabushiki Kaisha Toshiba | Information data multiplex transmission system, its multiplexer and demultiplexer and error correction encoder and decoder |
US5983384A (en) * | 1997-04-21 | 1999-11-09 | General Electric Company | Turbo-coding with staged data transmission and processing |
US6029264A (en) * | 1997-04-28 | 2000-02-22 | The Trustees Of Princeton University | System and method for error correcting a received data stream in a concatenated system |
EP0983637B1 (en) * | 1997-04-30 | 2001-09-26 | Siemens Aktiengesellschaft | Method and device for determining at least one digital signal value from an electric signal |
KR19990003242A (en) | 1997-06-25 | 1999-01-15 | 윤종용 | Structural Punched Convolutional Codes and Decoders |
DE69841631D1 (en) * | 1997-07-30 | 2010-06-02 | Samsung Electronics Co Ltd | Method and apparatus for adaptive channel coding |
KR19990012821A (en) | 1997-07-31 | 1999-02-25 | 홍성용 | Electromagnetic wave absorber composition and its manufacturing method, electromagnetic wave absorbing coating composition, its manufacturing method and its coating method |
US6192503B1 (en) * | 1997-08-14 | 2001-02-20 | Ericsson Inc. | Communications system and methods employing selective recursive decording |
JP4033245B2 (en) * | 1997-09-02 | 2008-01-16 | ソニー株式会社 | Turbo coding apparatus and turbo coding method |
US6138260A (en) * | 1997-09-04 | 2000-10-24 | Conexant Systems, Inc. | Retransmission packet capture system within a wireless multiservice communications environment with turbo decoding |
KR100248396B1 (en) * | 1997-10-24 | 2000-03-15 | 정선종 | Designed method of channel encoder by using parallel convolutional encoder |
US6000054A (en) * | 1997-11-03 | 1999-12-07 | Motorola, Inc. | Method and apparatus for encoding and decoding binary information using restricted coded modulation and parallel concatenated convolution codes |
EP0952673B1 (en) * | 1997-11-10 | 2017-05-17 | Ntt Mobile Communications Network Inc. | Interleaving method, interleaving apparatus, and recording medium in which interleave pattern generating program is recorded |
FR2771228A1 (en) * | 1997-11-18 | 1999-05-21 | Philips Electronics Nv | DIGITAL TRANSMISSION SYSTEM, DECODER, AND DECODING METHOD |
US6256764B1 (en) * | 1997-11-26 | 2001-07-03 | Nortel Networks Limited | Method and system for decoding tailbiting convolution codes |
CN1161886C (en) * | 1997-12-24 | 2004-08-11 | 英马尔塞特有限公司 | Coding method and apparatus |
US6088387A (en) * | 1997-12-31 | 2000-07-11 | At&T Corp. | Multi-channel parallel/serial concatenated convolutional codes and trellis coded modulation encoder/decoder |
US7536624B2 (en) | 2002-01-03 | 2009-05-19 | The Directv Group, Inc. | Sets of rate-compatible universal turbo codes nearly optimized over various rates and interleaver sizes |
US6370669B1 (en) * | 1998-01-23 | 2002-04-09 | Hughes Electronics Corporation | Sets of rate-compatible universal turbo codes nearly optimized over various rates and interleaver sizes |
US6430722B1 (en) * | 1998-01-23 | 2002-08-06 | Hughes Electronics Corporation | Forward error correction scheme for data channels using universal turbo codes |
US6275538B1 (en) * | 1998-03-11 | 2001-08-14 | Ericsson Inc. | Technique for finding a starting state for a convolutional feedback encoder |
US6452985B1 (en) * | 1998-03-18 | 2002-09-17 | Sony Corporation | Viterbi decoding apparatus and Viterbi decoding method |
WO1999050963A2 (en) | 1998-03-31 | 1999-10-07 | Samsung Electronics Co., Ltd. | TURBO ENCODING/DECODING DEVICE AND METHOD FOR PROCESSING FRAME DATA ACCORDING TO QoS |
KR100557177B1 (en) * | 1998-04-04 | 2006-07-21 | Samsung Electronics Co., Ltd. | Adaptive channel coding/decoding method and coding/decoding apparatus
RU2184419C2 (en) * | 1998-04-18 | 2002-06-27 | Samsung Electronics Co., Ltd. | Channel coding device and method
US6198775B1 (en) * | 1998-04-28 | 2001-03-06 | Ericsson Inc. | Transmit diversity method, systems, and terminals using scramble coding |
CN100466483C (en) * | 1998-06-05 | 2009-03-04 | Samsung Electronics Co., Ltd. | Transmitter and method for rate matching
US6298463B1 (en) * | 1998-07-31 | 2001-10-02 | Nortel Networks Limited | Parallel concatenated convolutional coding |
KR100373965B1 (en) | 1998-08-17 | 2003-02-26 | Hughes Electronics Corporation | Turbo code interleaver with near optimal performance
JP2000068862A (en) * | 1998-08-19 | 2000-03-03 | Fujitsu Ltd | Error correction coder |
US6192501B1 (en) | 1998-08-20 | 2001-02-20 | General Electric Company | High data rate maximum a posteriori decoder for segmented trellis code words |
US6128765A (en) * | 1998-08-20 | 2000-10-03 | General Electric Company | Maximum A posterior estimator with fast sigma calculator |
US6263467B1 (en) | 1998-08-20 | 2001-07-17 | General Electric Company | Turbo code decoder with modified systematic symbol transition probabilities |
US6223319B1 (en) | 1998-08-20 | 2001-04-24 | General Electric Company | Turbo code decoder with controlled probability estimate feedback |
US6332209B1 (en) * | 1998-08-27 | 2001-12-18 | Hughes Electronics Corporation | Method for a general turbo code trellis termination |
KR100377939B1 (en) * | 1998-09-01 | 2003-06-12 | Samsung Electronics Co., Ltd. | Frame Composition Device and Method for Subframe Transmission in Mobile Communication System
ATE270795T1 (en) | 1998-09-28 | 2004-07-15 | Comtech Telecomm Corp | TURBO PRODUCT CODE DECODER |
US6427214B1 (en) | 1998-09-29 | 2002-07-30 | Nortel Networks Limited | Interleaver using co-set partitioning |
US6028897A (en) * | 1998-10-22 | 2000-02-22 | The Aerospace Corporation | Error-floor mitigating turbo code communication method |
US6044116A (en) * | 1998-10-29 | 2000-03-28 | The Aerospace Corporation | Error-floor mitigated and repetitive turbo coding communication system |
US6014411A (en) * | 1998-10-29 | 2000-01-11 | The Aerospace Corporation | Repetitive turbo coding communication method |
KR100277764B1 (en) * | 1998-12-10 | 2001-01-15 | 윤종용 | Encoder and decoder comprising serial concatenation structure in communication system
US6202189B1 (en) * | 1998-12-17 | 2001-03-13 | Teledesic Llc | Punctured serial concatenated convolutional coding system and method for low-earth-orbit satellite data communication |
KR100346170B1 (en) * | 1998-12-21 | 2002-11-30 | Samsung Electronics Co., Ltd. | Interleaving/deinterleaving apparatus and method of communication system
US6484283B2 (en) * | 1998-12-30 | 2002-11-19 | International Business Machines Corporation | Method and apparatus for encoding and decoding a turbo code in an integrated modem system |
KR100315708B1 (en) * | 1998-12-31 | 2002-02-28 | 윤종용 | Apparatus and method for puncturing a turbo encoder in a mobile communication system |
KR100296028B1 (en) * | 1998-12-31 | 2001-09-06 | 윤종용 | Decoder with Gain Control Device in Mobile Communication System |
US6088405A (en) * | 1999-01-15 | 2000-07-11 | Lockheed Martin Corporation | Optimal decoder for tail-biting convolutional codes
US6665357B1 (en) * | 1999-01-22 | 2003-12-16 | Sharp Laboratories Of America, Inc. | Soft-output turbo code decoder and optimized decoding method |
US6304995B1 (en) * | 1999-01-26 | 2001-10-16 | Trw Inc. | Pipelined architecture to decode parallel and serial concatenated codes |
FR2789824B1 (en) | 1999-02-12 | 2001-05-11 | Canon Kk | METHOD FOR CORRECTING RESIDUAL ERRORS AT THE OUTPUT OF A TURBO-DECODER |
EP1030457B1 (en) * | 1999-02-18 | 2012-08-08 | Imec | Methods and system architectures for turbo decoding |
US6678843B2 (en) * | 1999-02-18 | 2004-01-13 | Interuniversitair Microelektronics Centrum (Imec) | Method and apparatus for interleaving, deinterleaving and combined interleaving-deinterleaving |
US6499128B1 (en) | 1999-02-18 | 2002-12-24 | Cisco Technology, Inc. | Iterated soft-decision decoding of block codes |
WO2000052832A1 (en) * | 1999-03-01 | 2000-09-08 | Fujitsu Limited | Turbo-decoding device |
FR2790621B1 (en) | 1999-03-05 | 2001-12-21 | Canon Kk | INTERLEAVING DEVICE AND METHOD FOR TURBOCODING AND TURBODECODING |
US6304996B1 (en) * | 1999-03-08 | 2001-10-16 | General Electric Company | High-speed turbo decoder |
US6754290B1 (en) * | 1999-03-31 | 2004-06-22 | Qualcomm Incorporated | Highly parallel map decoder |
US6594792B1 (en) | 1999-04-30 | 2003-07-15 | General Electric Company | Modular turbo decoder for expanded code word length |
US6715120B1 (en) | 1999-04-30 | 2004-03-30 | General Electric Company | Turbo decoder with modified input for increased code word length and data rate |
DE19924211A1 (en) * | 1999-05-27 | 2000-12-21 | Siemens Ag | Method and device for flexible channel coding |
US6473878B1 (en) * | 1999-05-28 | 2002-10-29 | Lucent Technologies Inc. | Serial-concatenated turbo codes |
JP3670520B2 (en) * | 1999-06-23 | 2005-07-13 | Fujitsu Limited | Turbo decoder and turbo decoding apparatus
US6516136B1 (en) * | 1999-07-06 | 2003-02-04 | Agere Systems Inc. | Iterative decoding of concatenated codes for recording systems |
KR100421853B1 (en) * | 1999-11-01 | 2004-03-10 | LG Electronics Inc. | Rate matching method in uplink
JP3846527B2 (en) * | 1999-07-21 | 2006-11-15 | Mitsubishi Electric Corporation | Turbo code error correction decoder, turbo code error correction decoding method, turbo code decoding apparatus, and turbo code decoding system
US7031406B1 (en) * | 1999-08-09 | 2006-04-18 | Nortel Networks Limited | Information processing using a soft output Viterbi algorithm |
DE19946721A1 (en) * | 1999-09-29 | 2001-05-03 | Siemens Ag | Method and device for channel coding in a communication system |
US6226773B1 (en) * | 1999-10-20 | 2001-05-01 | At&T Corp. | Memory-minimized architecture for implementing map decoding |
DE69908366T2 (en) * | 1999-10-21 | 2003-12-04 | Sony Int Europe Gmbh | SOVA turbo decoder with lower normalization complexity |
US6580767B1 (en) * | 1999-10-22 | 2003-06-17 | Motorola, Inc. | Cache and caching method for conventional decoders |
ATE239328T1 (en) * | 1999-10-27 | 2003-05-15 | Infineon Technologies Ag | METHOD AND DEVICE FOR ENCODING A PUNCTURED TURBO CODE
JP3549788B2 (en) * | 1999-11-05 | 2004-08-04 | Mitsubishi Electric Corporation | Multi-stage encoding method, multi-stage decoding method, multi-stage encoding device, multi-stage decoding device, and information transmission system using these
US6400290B1 (en) * | 1999-11-29 | 2002-06-04 | Altera Corporation | Normalization implementation for a logmap decoder |
AU4515801A (en) * | 1999-12-03 | 2001-06-18 | Broadcom Corporation | Viterbi slicer for turbo codes |
EP1254544B1 (en) * | 1999-12-03 | 2015-04-29 | Broadcom Corporation | Embedded training sequences for carrier acquisition and tracking |
DE10001147A1 (en) * | 2000-01-13 | 2001-07-19 | Siemens Ag | Data bit stream error protection method esp. for mobile phone networks - requires attaching given number of known tail-bits to end of respective data-blocks for protection of information-carrying bits in end-zones |
KR100374787B1 (en) * | 2000-01-18 | 2003-03-04 | Samsung Electronics Co., Ltd. | Bandwidth-efficient concatenated trellis-coded modulation decoder and method thereof
US7092457B1 (en) * | 2000-01-18 | 2006-08-15 | University Of Southern California | Adaptive iterative detection |
KR20070098913A (en) | 2000-01-20 | 2007-10-05 | Nortel Networks Limited | Hybrid ARQ schemes with soft combining in variable rate packet data applications
KR100331686B1 (en) * | 2000-01-26 | 2002-11-11 | Electronics and Telecommunications Research Institute | Turbo decoder using base-2 log-MAP algorithm
US6810502B2 (en) | 2000-01-28 | 2004-10-26 | Conexant Systems, Inc. | Iterative decoder employing multiple external code error checks to lower the error floor
US6606724B1 (en) * | 2000-01-28 | 2003-08-12 | Conexant Systems, Inc. | Method and apparatus for decoding of a serially concatenated block and convolutional code |
US6516437B1 (en) | 2000-03-07 | 2003-02-04 | General Electric Company | Turbo decoder control for use with a programmable interleaver, variable block length, and multiple code rates |
US7356752B2 (en) * | 2000-03-14 | 2008-04-08 | Comtech Telecommunications Corp. | Enhanced turbo product codes |
JP2003534680A (en) * | 2000-04-04 | 2003-11-18 | Comtech Telecommunications Corp. | Enhanced turbo product codec system
US6606725B1 (en) | 2000-04-25 | 2003-08-12 | Mitsubishi Electric Research Laboratories, Inc. | MAP decoding for turbo codes by parallel matrix processing |
FR2808632B1 (en) * | 2000-05-03 | 2002-06-28 | Mitsubishi Electric Inf Tech | TURBO-DECODING METHOD WITH RE-ENCODING OF ERRONEOUS INFORMATION AND FEEDBACK
US20020172292A1 (en) * | 2000-05-05 | 2002-11-21 | Gray Paul K. | Error floor turbo codes |
US6542559B1 (en) * | 2000-05-15 | 2003-04-01 | Qualcomm, Incorporated | Decoding method and apparatus |
CA2348941C (en) * | 2000-05-26 | 2008-03-18 | Stewart N. Crozier | Method and system for high-spread high-distance interleaving for turbo-codes |
US6738942B1 (en) * | 2000-06-02 | 2004-05-18 | Vitesse Semiconductor Corporation | Product code based forward error correction system |
FI109162B (en) * | 2000-06-30 | 2002-05-31 | Nokia Corp | Method and arrangement for decoding a convolutionally coded codeword
JP4543522B2 (en) * | 2000-08-31 | 2010-09-15 | Sony Corporation | Soft output decoding device, soft output decoding method, and decoding device and decoding method
EP1364479B1 (en) * | 2000-09-01 | 2010-04-28 | Broadcom Corporation | Satellite receiver and corresponding method |
AU2001287101A1 (en) * | 2000-09-05 | 2002-03-22 | Broadcom Corporation | Quasi error free (qef) communication using turbo codes |
US7242726B2 (en) * | 2000-09-12 | 2007-07-10 | Broadcom Corporation | Parallel concatenated code with soft-in soft-out interactive turbo decoder |
US6604220B1 (en) * | 2000-09-28 | 2003-08-05 | Western Digital Technologies, Inc. | Disk drive comprising a multiple-input sequence detector selectively biased by bits of a decoded ECC codeword
US6518892B2 (en) * | 2000-11-06 | 2003-02-11 | Broadcom Corporation | Stopping criteria for iterative decoding |
US20020104058A1 (en) * | 2000-12-06 | 2002-08-01 | Yigal Rappaport | Packet switched network having error correction capabilities of variable size data packets and a method thereof |
US7230978B2 (en) | 2000-12-29 | 2007-06-12 | Infineon Technologies Ag | Channel CODEC processor configurable for multiple wireless communications standards |
US6813742B2 (en) * | 2001-01-02 | 2004-11-02 | Icomm Technologies, Inc. | High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture |
FI20010147A (en) * | 2001-01-24 | 2002-07-25 | Nokia Corp | A method and arrangement for decoding a convolutionally coded codeword |
WO2002067429A2 (en) * | 2001-02-20 | 2002-08-29 | Cute Ltd. | System and method for enhanced error correction in trellis decoding |
FR2822316B1 (en) * | 2001-03-19 | 2003-05-02 | Mitsubishi Electric Inf Tech | OPTIMIZATION METHOD, UNDER RESOURCE CONSTRAINTS, OF THE SIZE OF ENCODED DATA BLOCKS |
JP4451008B2 (en) * | 2001-04-04 | 2010-04-14 | Mitsubishi Electric Corporation | Error correction encoding method and decoding method and apparatus
US6738948B2 (en) * | 2001-04-09 | 2004-05-18 | Motorola, Inc. | Iteration terminating using quality index criteria of turbo codes |
WO2002091592A1 (en) * | 2001-05-09 | 2002-11-14 | Comtech Telecommunications Corp. | Low density parity check codes and low density turbo product codes |
US7012911B2 (en) * | 2001-05-31 | 2006-03-14 | Qualcomm Inc. | Method and apparatus for W-CDMA modulation |
US20030123563A1 (en) * | 2001-07-11 | 2003-07-03 | Guangming Lu | Method and apparatus for turbo encoding and decoding |
CA2421427A1 (en) * | 2001-07-12 | 2003-01-23 | Min-Goo Kim | Reverse transmission apparatus and method for improving transmission throughput in a data communication system |
US6738370B2 (en) * | 2001-08-22 | 2004-05-18 | Nokia Corporation | Method and apparatus implementing retransmission in a communication system providing H-ARQ |
US7085969B2 (en) | 2001-08-27 | 2006-08-01 | Industrial Technology Research Institute | Encoding and decoding apparatus and method |
US6763493B2 (en) * | 2001-09-21 | 2004-07-13 | The Directv Group, Inc. | Method and system for performing decoding using a reduced-memory implementation |
FR2830384B1 (en) * | 2001-10-01 | 2003-12-19 | Cit Alcatel | DEVICE FOR CONVOLUTIVE ENCODING AND DECODING |
EP1317070A1 (en) * | 2001-12-03 | 2003-06-04 | Mitsubishi Electric Information Technology Centre Europe B.V. | Method for obtaining from a block turbo-code an error correcting code of desired parameters |
JP3637323B2 (en) * | 2002-03-19 | 2005-04-13 | Toshiba Corporation | Reception device, transmission/reception device, and reception method
JP3549519B2 (en) * | 2002-04-26 | 2004-08-04 | Oki Electric Industry Co., Ltd. | Soft output decoder
US20030219513A1 (en) * | 2002-05-21 | 2003-11-27 | Roni Gordon | Personal nutrition control method |
US20050226970A1 (en) * | 2002-05-21 | 2005-10-13 | Centrition Ltd. | Personal nutrition control method and measuring devices |
JP3898574B2 (en) * | 2002-06-05 | 2007-03-28 | Fujitsu Limited | Turbo decoding method and turbo decoding apparatus
KR100584170B1 (en) * | 2002-07-11 | 2006-06-02 | Seoul National University Industry-Academic Cooperation Foundation | Turbo Coded Hybrid Automatic Repeat Request System And Error Detection Method
US6774825B2 (en) * | 2002-09-25 | 2004-08-10 | Infineon Technologies Ag | Modulation coding based on an ECC interleave structure |
US7346833B2 (en) * | 2002-11-05 | 2008-03-18 | Analog Devices, Inc. | Reduced complexity turbo decoding scheme |
EP1592137A1 (en) | 2004-04-28 | 2005-11-02 | Samsung Electronics Co., Ltd. | Apparatus and method for coding/decoding block low density parity check code with variable block length |
CN100367676C (en) * | 2004-05-27 | 2008-02-06 | Institute of Computing Technology, Chinese Academy of Sciences | Method and compressing circuits carried by high code rate convolutional codes
WO2005119627A2 (en) * | 2004-06-01 | 2005-12-15 | Centrition Ltd. | Personal nutrition control devices |
US7346832B2 (en) | 2004-07-21 | 2008-03-18 | Qualcomm Incorporated | LDPC encoding methods and apparatus |
US7395490B2 (en) | 2004-07-21 | 2008-07-01 | Qualcomm Incorporated | LDPC decoding methods and apparatus |
KR101131323B1 (en) | 2004-11-30 | 2012-04-04 | Samsung Electronics Co., Ltd. | Apparatus and method for channel interleaving in a wireless communication system
US7373585B2 (en) * | 2005-01-14 | 2008-05-13 | Mitsubishi Electric Research Laboratories, Inc. | Combined-replica group-shuffled iterative decoding for error-correcting codes |
US7461328B2 (en) * | 2005-03-25 | 2008-12-02 | Teranetics, Inc. | Efficient decoding |
US7502982B2 (en) * | 2005-05-18 | 2009-03-10 | Seagate Technology Llc | Iterative detector with ECC in channel domain |
US7360147B2 (en) * | 2005-05-18 | 2008-04-15 | Seagate Technology Llc | Second stage SOVA detector |
US7395461B2 (en) | 2005-05-18 | 2008-07-01 | Seagate Technology Llc | Low complexity pseudo-random interleaver |
US8611305B2 (en) | 2005-08-22 | 2013-12-17 | Qualcomm Incorporated | Interference cancellation for wireless communications |
US8271848B2 (en) * | 2006-04-06 | 2012-09-18 | Alcatel Lucent | Method of decoding code blocks and system for concatenating code blocks |
US20080092018A1 (en) * | 2006-09-28 | 2008-04-17 | Broadcom Corporation, A California Corporation | Tail-biting turbo code for arbitrary number of information bits |
US7831894B2 (en) * | 2006-10-10 | 2010-11-09 | Broadcom Corporation | Address generation for contention-free memory mappings of turbo codes with ARP (almost regular permutation) interleaves |
US7827473B2 (en) * | 2006-10-10 | 2010-11-02 | Broadcom Corporation | Turbo decoder employing ARP (almost regular permutation) interleave and arbitrary number of decoding processors |
US8392811B2 (en) * | 2008-01-07 | 2013-03-05 | Qualcomm Incorporated | Methods and systems for a-priori decoding based on MAP messages |
TWI374613B (en) | 2008-02-29 | 2012-10-11 | Ind Tech Res Inst | Method and apparatus of pre-encoding and pre-decoding |
EP2096884A1 (en) | 2008-02-29 | 2009-09-02 | Koninklijke KPN N.V. | Telecommunications network and method for time-based network access |
US8250448B1 (en) * | 2008-03-26 | 2012-08-21 | Xilinx, Inc. | Method of and apparatus for implementing a decoder |
US8719670B1 (en) * | 2008-05-07 | 2014-05-06 | Sk Hynix Memory Solutions Inc. | Coding architecture for multi-level NAND flash memory with stuck cells |
US8995417B2 (en) | 2008-06-09 | 2015-03-31 | Qualcomm Incorporated | Increasing capacity in wireless communication |
US9237515B2 (en) | 2008-08-01 | 2016-01-12 | Qualcomm Incorporated | Successive detection and cancellation for cell pilot detection |
US9277487B2 (en) | 2008-08-01 | 2016-03-01 | Qualcomm Incorporated | Cell detection with interference cancellation |
EP2223431A4 (en) * | 2008-08-15 | 2010-09-01 | Lsi Corp | RAM list-decoding of near codewords
CN102077173B (en) | 2009-04-21 | 2015-06-24 | Agere Systems LLC | Error-floor mitigation of codes using write verification
US9160577B2 (en) | 2009-04-30 | 2015-10-13 | Qualcomm Incorporated | Hybrid SAIC receiver |
JP6091895B2 (en) * | 2009-11-27 | 2017-03-08 | Qualcomm Incorporated | Increased capacity in wireless communications
ES2708959T3 (en) | 2009-11-27 | 2019-04-12 | Qualcomm Inc | Greater capacity in wireless communications |
PL2524371T3 (en) * | 2010-01-12 | 2017-06-30 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using a hash table describing both significant state values and interval boundaries |
US8448033B2 (en) * | 2010-01-14 | 2013-05-21 | Mediatek Inc. | Interleaving/de-interleaving method, soft-in/soft-out decoding method and error correction code encoder and decoder utilizing the same |
US8464142B2 (en) | 2010-04-23 | 2013-06-11 | Lsi Corporation | Error-correction decoder employing extrinsic message averaging |
US8499226B2 (en) * | 2010-06-29 | 2013-07-30 | Lsi Corporation | Multi-mode layered decoding |
US8458555B2 (en) | 2010-06-30 | 2013-06-04 | Lsi Corporation | Breaking trapping sets using targeted bit adjustment |
US8504900B2 (en) | 2010-07-02 | 2013-08-06 | Lsi Corporation | On-line discovery and filtering of trapping sets |
US8769365B2 (en) | 2010-10-08 | 2014-07-01 | Blackberry Limited | Message rearrangement for improved wireless code performance |
CN107276717B (en) * | 2010-10-08 | 2020-06-26 | BlackBerry Limited | Message rearrangement for improved code performance
CN102412849A (en) * | 2011-09-26 | 2012-04-11 | ZTE Corporation | Coding method and coding device for convolutional code
US9043667B2 (en) | 2011-11-04 | 2015-05-26 | Blackberry Limited | Method and system for up-link HARQ-ACK and CSI transmission |
US8768990B2 (en) | 2011-11-11 | 2014-07-01 | Lsi Corporation | Reconfigurable cyclic shifter arrangement |
US10178651B2 (en) | 2012-05-11 | 2019-01-08 | Blackberry Limited | Method and system for uplink HARQ and CSI multiplexing for carrier aggregation |
US20130326630A1 (en) * | 2012-06-01 | 2013-12-05 | Whisper Communications, LLC | Pre-processor for physical layer security |
US9053047B2 (en) * | 2012-08-27 | 2015-06-09 | Apple Inc. | Parameter estimation using partial ECC decoding |
RU2012146685A (en) | 2012-11-01 | 2014-05-10 | LSI Corporation | Trapping-set database for a low-density parity-check (LDPC) decoder
US9432053B1 (en) * | 2014-07-07 | 2016-08-30 | Microsemi Storage Solutions (U.S.), Inc. | High speed LDPC decoder |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2675968B1 (en) * | 1991-04-23 | 1994-02-04 | France Telecom | METHOD FOR DECODING A CONVOLUTIVE CODE WITH MAXIMUM LIKELIHOOD AND WEIGHTING OF DECISIONS, AND CORRESPONDING DECODER. |
FR2675971B1 (en) * | 1991-04-23 | 1993-08-06 | France Telecom | CORRECTIVE ERROR CODING METHOD WITH AT LEAST TWO SYSTEMIC CONVOLUTIVE CODES IN PARALLEL, ITERATIVE DECODING METHOD, CORRESPONDING DECODING MODULE AND DECODER. |
US5349589A (en) * | 1991-07-01 | 1994-09-20 | Ericsson Ge Mobile Communications Inc. | Generalized viterbi algorithm with tail-biting |
US5369671A (en) * | 1992-05-20 | 1994-11-29 | Hughes Aircraft Company | System and method for decoding tail-biting code especially applicable to digital cellular base stations and mobile units |
US5355376A (en) * | 1993-02-11 | 1994-10-11 | At&T Bell Laboratories | Circular viterbi decoder |
US5577053A (en) * | 1994-09-14 | 1996-11-19 | Ericsson Inc. | Method and apparatus for decoder optimization |
1996
- 1996-04-19 US US08/636,732 patent/US5721745A/en not_active Expired - Lifetime
1997
- 1997-04-14 RU RU98100768/09A patent/RU2187196C2/en not_active IP Right Cessation
- 1997-04-14 AU AU24591/97A patent/AU716645B2/en not_active Ceased
- 1997-04-14 WO PCT/US1997/006129 patent/WO1997040582A1/en active IP Right Grant
- 1997-04-14 KR KR1019970709441A patent/KR100522263B1/en active IP Right Grant
- 1997-04-14 IL IL12252597A patent/IL122525A0/en not_active IP Right Cessation
- 1997-04-14 HU HU9901440A patent/HU220815B1/en not_active IP Right Cessation
- 1997-04-14 EP EP97920377A patent/EP0834222B1/en not_active Expired - Lifetime
- 1997-04-14 CN CN97190399A patent/CN1111962C/en not_active Expired - Fee Related
- 1997-04-14 UA UA97125953A patent/UA44779C2/en unknown
- 1997-04-14 PL PL97349516A patent/PL183537B1/en not_active IP Right Cessation
- 1997-04-14 PL PL97323524A patent/PL183239B1/en not_active IP Right Cessation
- 1997-04-14 JP JP53813797A patent/JP3857320B2/en not_active Expired - Fee Related
- 1997-04-14 CZ CZ0407397A patent/CZ296885B6/en not_active IP Right Cessation
- 1997-04-14 BR BR9702156A patent/BR9702156A/en not_active Application Discontinuation
- 1997-04-14 DE DE69736881T patent/DE69736881T2/en not_active Expired - Fee Related
- 1997-04-14 PL PL97349517A patent/PL184230B1/en not_active IP Right Cessation
- 1997-04-14 CA CA002221295A patent/CA2221295C/en not_active Expired - Fee Related
- 1997-04-15 ZA ZA9703217A patent/ZA973217B/en unknown
- 1997-04-17 MY MYPI97001695A patent/MY113013A/en unknown
- 1997-04-17 ID IDP971284A patent/ID16464A/en unknown
- 1997-04-21 AR ARP970101602A patent/AR006767A1/en unknown
- 1997-12-18 NO NO975966A patent/NO975966L/en not_active Application Discontinuation
Also Published As
Publication number | Publication date |
---|---|
WO1997040582A1 (en) | 1997-10-30 |
AU716645B2 (en) | 2000-03-02 |
DE69736881D1 (en) | 2006-12-14 |
NO975966D0 (en) | 1997-12-18 |
PL184230B1 (en) | 2002-09-30 |
JP3857320B2 (en) | 2006-12-13 |
HUP9901440A3 (en) | 2000-03-28 |
JPH11508439A (en) | 1999-07-21 |
PL183239B1 (en) | 2002-06-28 |
KR100522263B1 (en) | 2006-02-01 |
CN1189935A (en) | 1998-08-05 |
DE69736881T2 (en) | 2007-06-21 |
IL122525A0 (en) | 1998-06-15 |
CZ407397A3 (en) | 1998-06-17 |
CA2221295A1 (en) | 1997-10-30 |
ID16464A (en) | 1997-10-02 |
CZ296885B6 (en) | 2006-07-12 |
PL183537B1 (en) | 2002-06-28 |
HUP9901440A2 (en) | 1999-08-30 |
BR9702156A (en) | 1999-07-20 |
CN1111962C (en) | 2003-06-18 |
RU2187196C2 (en) | 2002-08-10 |
AR006767A1 (en) | 1999-09-29 |
EP0834222B1 (en) | 2006-11-02 |
PL323524A1 (en) | 1998-03-30 |
AU2459197A (en) | 1997-11-12 |
UA44779C2 (en) | 2002-03-15 |
ZA973217B (en) | 1997-12-18 |
NO975966L (en) | 1997-12-18 |
EP0834222A1 (en) | 1998-04-08 |
HU220815B1 (en) | 2002-05-28 |
MY113013A (en) | 2001-10-31 |
KR19990022971A (en) | 1999-03-25 |
US5721745A (en) | 1998-02-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2221295C (en) | Parallel concatenated tail-biting convolutional code and decoder therefor | |
US6289486B1 (en) | Adaptive channel encoding method and device | |
EP0997031B1 (en) | Adaptive channel encoding method and device | |
Divsalar et al. | Hybrid concatenated codes and iterative decoding | |
CA2234006C (en) | Encoding and decoding methods and apparatus | |
CA2317202A1 (en) | Channel decoder and method of channel decoding | |
US20020104054A1 (en) | Encoding method having improved interleaving | |
US20010010089A1 (en) | Digital transmission method of the error-correcting coding type | |
US6487694B1 (en) | Method and apparatus for turbo-code decoding a convolution encoded data frame using symbol-by-symbol traceback and HR-SOVA | |
EP1257064B1 (en) | Prunable S-random block interleaver method and corresponding interleaver | |
US6961894B2 (en) | Digital transmission method of the error-correcting coding type | |
EP1119915B1 (en) | Hybrid interleaver for turbo codes | |
Sklar | Turbo code concepts made easy, or how I learned to concatenate and reiterate | |
US20060048035A1 (en) | Method and apparatus for detecting a packet error in a wireless communications system with minimum overhead using embedded error detection capability of turbo code | |
Pehkonen et al. | A superorthogonal turbo-code for CDMA applications | |
Lin et al. | An efficient soft-input scaling scheme for turbo decoding | |
KR20070112326A (en) | High code rate turbo coding method for the high speed data transmission and therefore apparatus | |
Fanucci et al. | VLSI design of a high speed turbo decoder for 3rd generation satellite communication | |
Talakoub et al. | A linear Log-MAP algorithm for turbo decoding over AWGN channels | |
Dave et al. | Turbo block codes using modified Kaneko's algorithm | |
GB2409618A (en) | Telecommunications decoder device | |
JP2001326577A (en) | Device and method for directly connected convolutional encoding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKLA | Lapsed |