Analysis of Finite-Alphabet Source-Channel Codes

Multimedia Communications Laboratory
University of Texas at Dallas


In this work [1] we consider the problem of transmitting discrete, finite-alphabet sources over a noisy channel. Since efficient entropy codes are often variable-length codes (VLC), conventional channel decoding followed by symbol-by-symbol entropy decoding suffers from error propagation: a single uncorrected channel error may produce a long sequence of data errors.
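To make the error-propagation effect concrete, here is a small self-contained sketch (our own illustration, not from [1]) using a toy prefix code: a single flipped channel bit desynchronizes a symbol-by-symbol VLC decoder, so the error spreads beyond the symbol it hit.

```python
# Toy illustration of VLC error propagation (not from the paper).
CODE = {'a': '00', 'b': '11', 'c': '10', 'd': '010', 'e': '011'}
INV = {v: k for k, v in CODE.items()}

def encode(symbols):
    return ''.join(CODE[s] for s in symbols)

def decode(bits):
    # Greedy prefix decoding: grow a window until it matches a codeword.
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in INV:
            out.append(INV[buf])
            buf = ''
    return out

msg = list('adbecadbe')
tx = encode(msg)
rx = tx[:1] + ('1' if tx[1] == '0' else '0') + tx[2:]  # flip one bit

print(decode(tx))  # recovers msg exactly
print(decode(rx))  # the single bit error corrupts a run of symbols
```

Well-structured codes eventually resynchronize, but by then several source symbols have already been decoded incorrectly, which is exactly the effect the channel-coding layer must contain.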

Although it is possible to build some error resilience into variable-length codes, this resilience is often too weak for the error rates of most communication channels, so a separate layer of channel coding is usually necessary (see Figure 1); the source and channel codes are then used in conjunction with iterative decoding.

Figure 1. Variable-length source-channel coding

Prior to our work [1], several investigations were made in this general direction, but neither a comprehensive analysis nor design criteria were available for iteratively decoded source-channel coding systems. In particular, it was not clear how much redundancy, if any, should be allowed in the entropy code, or how one should design the inner error-correcting code. In this work, we analyze the concatenated system, develop design criteria for the constituent codes, and compare various tradeoffs in the design of such codes, supported by extensive simulations.

Turbo Decoding with RVLC Codes

We employ the bit-level trellis that was proposed by Balakirsky [2], and later used by Bauer and Hagenauer [3], to represent the redundant variable-length code (RVLC). This trellis is obtained simply by assigning the states of the trellis to the nodes of the VLC tree. Figure 2 shows the trellis corresponding to the Huffman code C1 = {00, 11, 10, 010, 011}.

Figure 2. Bit-level trellises.
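As a sketch of the construction (our reading of [2, 3], so treat the details as an assumption), the trellis can be built by walking the code tree: every proper prefix of a codeword becomes a trellis state, and completing a codeword returns the walk to the root state.

```python
def bit_trellis(codewords):
    """Map a prefix-free VLC to a bit-level trellis.

    Returns (states, transitions): `states` maps each proper codeword
    prefix (an internal node of the VLC tree) to a state id, with ''
    (the root) as state 0; `transitions` maps (state, bit) -> next state.
    """
    states = {'': 0}
    transitions = {}
    for cw in codewords:
        prefix = ''
        for i, bit in enumerate(cw):
            nxt = cw[:i + 1]
            if nxt in codewords:
                # Completing a codeword folds the path back to the root.
                transitions[(states[prefix], bit)] = 0
            else:
                states.setdefault(nxt, len(states))
                transitions[(states[prefix], bit)] = states[nxt]
            prefix = nxt
    return states, transitions

C1 = ['00', '11', '10', '010', '011']
states, trans = bit_trellis(C1)
print(len(states))  # 4 states: root, '0', '1', '01'
```

For C1 the tree has three internal nodes besides the root, so the trellis has four states, and each transition carries one code bit, which is what makes bit-level SISO decoding on this trellis possible.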


We have analyzed the performance of the concatenated system using the input-output weight enumerating function (IOWEF) technique. The details are available in [1]; here we only discuss the implications of the results.

The techniques we used for the analysis are largely similar to those used to analyze serially concatenated convolutional codes (SCCC), with the key difference that our codes are nonlinear, so appropriate extensions to the analysis were needed. We were able to establish that in a joint source-channel coding system with a simple (non-concatenated) channel code, the outer code must have good minimum distance and the inner code must have a recursive structure for the overall code to perform well.

Our analysis also sheds light on the comparison between joint and separable source-channel coding. For example, we discovered that the joint source-channel coding scheme of Bauer and Hagenauer [3] is unnecessary and can be improved upon by a separable system of similar rate and complexity. Whenever the entropy code has small free distance (such as the RVLC used in [3]), one may be better off designing an iteratively decoded outer channel code instead of spending computational resources on iterative decoding between the source and channel codes. We also found that in several cases, even with outer codes of larger free distance, iterative source-channel decoding may yield only a slight advantage over separable source and channel coding (where in the latter case the channel code is a turbo code, so that the two overall systems are of comparable rate and complexity).
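The free distance of the entropy code, i.e. the minimum Hamming distance between two distinct codeword sequences of the same bit length, can be checked by brute force for small codes. The sketch below is our own illustration (an exponential search, usable only for toy codes), applied here to the Huffman code C1 of Figure 2.

```python
def free_distance(codewords, max_bits=8):
    """Minimum Hamming distance between distinct codeword sequences of
    equal bit length, found by exhaustive enumeration up to max_bits."""
    by_len = {}
    frontier = ['']
    while frontier:
        nxt = []
        for seq in frontier:
            for cw in codewords:
                s = seq + cw
                if len(s) <= max_bits:
                    by_len.setdefault(len(s), set()).add(s)
                    nxt.append(s)
        frontier = nxt
    best = None
    for seqs in by_len.values():
        seqs = sorted(seqs)
        for i in range(len(seqs)):
            for j in range(i + 1, len(seqs)):
                h = sum(a != b for a, b in zip(seqs[i], seqs[j]))
                if best is None or h < best:
                    best = h
    return best

print(free_distance(['00', '11', '10', '010', '011']))  # -> 1
```

C1 has free distance 1 (e.g. the codewords 00 and 10 differ in a single bit), which is exactly the situation where, per the design rules above, redundancy is better invested in the channel code.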

We note that our analysis and design rules are based on union bounds, which are useful at medium to high signal-to-noise ratios, in particular above the cutoff rate. However, it has been observed and reported [4] that concatenated codes designed with union-bound criteria perform well even in low-SNR regions. We have observed the same phenomenon for our codes.
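For reference, a union bound of this general type can be evaluated as P_b <= sum_d B_d Q(sqrt(2 d R Eb/N0)) for BPSK over an AWGN channel. The sketch below uses made-up error coefficients B_d purely to illustrate the computation; they are not the coefficients of any code in [1].

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x) via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(B, rate, ebno_db):
    """Evaluate sum_d B_d * Q(sqrt(2*d*rate*Eb/N0)).

    B maps a distance d to an (assumed, illustrative) coefficient B_d.
    """
    ebno = 10 ** (ebno_db / 10)
    return sum(bd * qfunc(math.sqrt(2 * d * rate * ebno))
               for d, bd in B.items())

# Illustrative coefficients only -- not from any code in the paper.
B = {6: 0.5, 8: 2.0, 10: 7.5}
for snr_db in (2.0, 4.0, 6.0):
    print(snr_db, union_bound(B, rate=0.5, ebno_db=snr_db))
```

Since Q(x) decays rapidly, the smallest distance term dominates the bound at high SNR, which is why the minimum-distance and recursiveness criteria above drive the design.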

The iterative decoding of serially concatenated codes requires a set of soft-input soft-output (SISO) modules for the channel code and the VLC, as shown in the figure below.

Figure 3. Iterative decoding of the joint source-channel code.

Simulations and Comparisons

The comparisons between joint and separable coding were performed on, among others, the system shown in the figure below. The experiments were designed so that systems of equivalent rate and complexity are compared against each other. The results of these comparisons were mentioned in part in the paragraphs above.

Figure 4. Comparing the concatenated source-channel code with a source coding followed by a concatenated channel code.

For details, including the composition of the variable-length codes as well as the generator polynomials of the channel codes used in the simulations, the reader is referred to [1]. Below we show some of our error bounds against simulated values to demonstrate the accuracy of the analysis, along with simulations of several joint source-channel codes and an EXIT chart analysis.

Figure 5. Bounds vs. simulations

Figure 6. Simulation of several source-channel codes

Figure 7. EXIT chart analysis


This work was supported in part by a grant from the National Science Foundation.


  1. A. Hedayat and A. Nosratinia, ``Performance analysis and design criteria for finite-alphabet source/channel codes,'' IEEE Transactions on Communications, vol. 52, no. 11, November 2004, pp. 1872-1879.

  2. V. B. Balakirsky, ``Joint source-channel coding with variable length codes,'' in Proc. IEEE ISIT, June 1997, p. 419.

  3. R. Bauer and J. Hagenauer, ``On variable length codes for iterative source/channel decoding,'' in Proc. Data Compression Conference, April 2001, pp. 273-282.

  4. S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, ``Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding,'' IEEE Transactions on Information Theory, vol. 44, no. 3, pp. 909-926, May 1998.
