Shannon's separation theorem states that source coding (compression) and channel coding (error protection) can be performed separately and sequentially without loss of optimality. However, this result holds only in the limit of asymptotically long block lengths. In many practical applications, the conditions of Shannon's separation theorem neither hold nor serve as a good approximation. Thus, considerable interest has developed in various schemes of joint source-channel coding.
Of particular interest are coding scenarios for image compression and transmission that have a successive approximation property. These schemes are particularly useful in image browsing and web-based applications. It is well known, however, that using progressive codes in the presence of channel errors leads to disastrous results, since the first error renders the remainder of the bitstream useless. Furthermore, decoding the now erroneously interpreted bitstream adds to the distortion of the decoded image. For example, the following image shows the Lena image compressed at 1 bit/pixel (bpp) and transmitted through a binary symmetric channel (BSC) with a bit error probability of 0.0001 (equivalently, 0.01%).
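The fragility of a progressive bitstream on a BSC can be seen with a small simulation. The sketch below (illustrative only; the stream here is random bits, not actual SPIHT output) flips each bit independently with probability p and locates the first corrupted position, after which a progressive decoder can recover nothing useful. With p = 0.0001, the first error typically lands after only about 10,000 bits, a small fraction of a 1 bpp stream for a 512x512 image.

```python
import random

def bsc(bits, p, rng):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def first_error(sent, received):
    """Index of the first corrupted bit, or None if error-free.
    For a progressive bitstream, everything from this index on
    is effectively lost to the decoder."""
    for i, (a, b) in enumerate(zip(sent, received)):
        if a != b:
            return i
    return None

rng = random.Random(0)
n = 512 * 512                      # roughly a 1 bpp stream for a 512x512 image
sent = [rng.randint(0, 1) for _ in range(n)]
received = bsc(sent, 1e-4, rng)
print(first_error(sent, received))  # typically on the order of 1/p = 10,000
```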
This experiment shows that even a benign channel can wreak havoc on a progressive bitstream, so some form of error protection (channel coding) is necessary. To use the available bandwidth efficiently, we must somehow decide what portion of the bandwidth to set aside for error protection. This is known as the rate allocation problem.
The rate allocation problem is key in developing high-performance end-to-end systems. We demonstrate this fact by the following argument.
Assuming a fixed overall channel rate, one must determine how much of the bitrate to dedicate to source bits, and how much to channel error protection. If too much rate is given to source bits, channel errors will destroy the signal (as demonstrated by the figure above). On the other hand, if too little rate is given to source bits, the reconstruction quality will be poor due to over-quantization of the source data. This tradeoff can be observed in a simple experiment, shown in the figure below. In this experiment, we use the SPIHT progressive image coder of Said and Pearlman and a rate-compatible punctured convolutional (RCPC) code, at a fixed overall rate. The horizontal axis represents the rate tradeoff.
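The U-shaped tradeoff can be sketched with a toy model (ours, not the paper's formulation): source distortion follows the classic D(R) = sigma^2 * 2^(-2R) rate-distortion model, and the residual probability of a decoding failure is an assumed exponential function of the redundancy R_total - R_source. A failure is charged the full signal variance. Sweeping the source share exposes an interior optimum: too little source rate over-quantizes, too much leaves the stream unprotected.

```python
import math

def expected_distortion(r_source, r_total, p0, sigma2=1.0):
    """Toy end-to-end distortion model (illustrative assumptions only):
    - source distortion: classic D(R) = sigma2 * 2^(-2R)
    - residual failure probability: p0 * exp(-(r_total - r_source)),
      a hypothetical model of how redundancy buys protection
    - a decoding failure loses the image entirely (distortion sigma2)."""
    p_fail = min(1.0, p0 * math.exp(-(r_total - r_source)))
    d_source = sigma2 * 2 ** (-2 * r_source)
    return (1 - p_fail) * d_source + p_fail * sigma2

# Sweep the source share of a 2.0 bpp budget; the minimum is interior.
rates = [i * 0.05 for i in range(1, 40)]
best_rate = min(rates, key=lambda r: expected_distortion(r, 2.0, 0.5))
```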
This research has produced a low-complexity mechanism for the determination of rate allocation for source-channel coding of progressive sources. The system diagram is shown in the figure below. We have solved the problem for any family of channel codes, and any progressive image coder. In particular, unlike some other solutions of this problem, we do not require a rate-compatibility condition on the error control code. The rate control unit uses a Lagrangian formulation of the problem to devise an efficient and fast solution.
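The general shape of a Lagrangian rate allocation can be illustrated with a generic sketch. Given a set of candidate operating points, each pairing a total rate with an expected end-to-end distortion, one selects the point minimizing D + lambda * R and bisects on the multiplier lambda until the rate budget is met. This is the standard Lagrangian search, not the specific algorithm of the cited work; the operating points below are hypothetical.

```python
def lagrangian_select(points, lam):
    """Pick the operating point minimizing D + lam * R.
    `points` is a list of (rate, distortion) pairs."""
    return min(points, key=lambda p: p[1] + lam * p[0])

def allocate(points, budget, iters=50):
    """Bisect on the multiplier lam until the selected point's rate
    meets the budget. Larger lam penalizes rate more, so the selected
    rate decreases monotonically in lam."""
    lo, hi = 0.0, 1e6
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        r, _ = lagrangian_select(points, lam)
        if r > budget:
            lo = lam   # over budget: penalize rate more heavily
        else:
            hi = lam
    return lagrangian_select(points, hi)

# Hypothetical (rate, expected distortion) operating points.
points = [(0.5, 1.0), (1.0, 0.5), (1.5, 0.3), (2.0, 0.25)]
```

For a convex set of operating points, this search visits only O(log) multiplier values, which is what makes the Lagrangian formulation fast in practice.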
For memoryless channels, we have found closed-form expressions for the end-to-end signal to noise ratios and rate allocation. Fading channels do not admit closed form expressions, however, similar (near-optimal) algorithmic solutions have been found for time-varying rate allocation in fading channels. We have also analyzed optimal strategies in the presence of feedback, i.e., in hybrid ARQ channels. The results are elegant and lead to computationally feasible algorithms.
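Two standard quantities underlying such analyses can be computed directly. For a memoryless BSC, the probability that a length-n packet decodes correctly under a t-error-correcting code is a binomial tail sum; with ideal feedback, the number of transmissions in simple ARQ is geometric, with mean 1/(1 - p_fail). These are textbook facts, shown here as a sketch; they are not the closed-form expressions or hybrid-ARQ strategy of the cited work.

```python
import math

def packet_success_bsc(n_bits, p_bit, t_correct):
    """Probability that a length-n packet over a BSC decodes correctly,
    assuming the channel code corrects up to t_correct bit errors."""
    return sum(math.comb(n_bits, k) * p_bit**k * (1 - p_bit)**(n_bits - k)
               for k in range(t_correct + 1))

def arq_expected_transmissions(p_fail):
    """Expected number of attempts in simple ARQ with ideal feedback:
    attempts are geometric, so the mean is 1 / (1 - p_fail)."""
    return 1.0 / (1.0 - p_fail)
```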
The formulation of the problem and its solution are available in references [2,3] below.
Acknowledgments: This research was made possible in part by the National Science Foundation through grant CCR-9985171.
Last modified February 2002