Internet traffic traces show that a significant number of packets are only 40 bytes long, i.e., they carry no payload, and that a majority of packets are shorter than 300 bytes. Headers therefore impose a heavy overhead in terms of bitrate. Another study found that over 55% of packets are shorter than 200 bytes. Fortunately, the header fields in successive packets are highly redundant, so they can be compressed. Many header compression algorithms have been proposed and studied over the past 15 years. These methods exploit the similarity between successive packet headers, much like DPCM. Packet header compression therefore shares the strengths and weaknesses of DPCM: excellent compression is achieved when headers are strongly correlated (as they usually are), but any error propagates and contaminates subsequent packets.
We propose to use error correcting codes [2-4] to limit error
propagation in header compression schemes. For uni-directional links, we
present coding techniques based on well-known error correcting codes
such as Reed-Solomon and convolutional codes.
For the feedback case we construct a predictive ARQ with
multiple retransmissions within a convolutional codeword. We also design
and use a delay-limited interleaver. With this structure, a very
attractive throughput-delay trade-off is obtained in the presence of
noise on both the forward and reverse links. In fact, both the throughput
and the delay of the proposed system are superior to previous solutions
under a large set of channel conditions.
Basic Header Compression
Some header fields stay static throughout a session, while others can be inferred from the fields of a previous packet. Fields such as the checksum are almost random, in the sense that they cannot easily be inferred from other available information. At the start of a session the transmitter sends an uncompressed header, which initializes the decompressor. Static fields are not included in the compressed header, random fields are included verbatim, and the remaining fields are either differentially encoded or not transmitted at all (inferred fields).
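The per-field decisions above can be sketched as follows. This is a minimal illustration, not an RFC-exact profile: the field names, their classification into sets, and the integer deltas are assumptions for the sake of the example.

```python
# Illustrative classification of header fields (names are hypothetical,
# not taken from any specific RFC profile).
STATIC   = {"version", "src_addr", "dst_addr", "src_port", "dst_port"}
INFERRED = {"length"}        # recomputable from lower-layer information
RANDOM   = {"checksum"}      # sent verbatim
# Everything else is differentially encoded against the CONTEXT.

def compress_header(header, context):
    """Return the compressed header: only the fields actually transmitted."""
    out = {}
    for field, value in header.items():
        if field in STATIC or field in INFERRED:
            continue                                # omitted entirely
        elif field in RANDOM:
            out[field] = value                      # included verbatim
        else:
            out[field] = value - context[field]     # small delta
    return out

def decompress_header(compressed, context, inferred_length):
    """Rebuild the full header from the CONTEXT and the compressed fields."""
    header = dict(context)                 # static fields come from CONTEXT
    header["length"] = inferred_length     # inferred from lower layers
    for field, value in compressed.items():
        if field in RANDOM:
            header[field] = value
        else:
            header[field] = context[field] + value
    return header
```

Only the sequence-number delta and the verbatim checksum travel on the link; everything else is reconstructed from the shared CONTEXT.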
A basic header compression system has a CONTEXT buffer on each side of the link, as shown
in Figure 1. At both the transmitter and the receiver, the last available header (in uncompressed form) is kept in
the CONTEXT as a reference. The transmitter compresses each incoming packet header with reference to the CONTEXT,
and then updates the CONTEXT with the uncompressed version of that header.
Similarly, the receiver decompresses each packet header with respect to its own CONTEXT. On average, a 40-byte
IPv4 (RTP/UDP or TCP) header can be compressed down to 3 bytes.
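The DPCM-style error propagation mentioned earlier falls directly out of this CONTEXT mechanism. The toy simulation below (a single integer sequence number stands in for the whole header) shows both sides staying in sync when nothing is lost, and every subsequent header decoding wrongly once one compressed header is dropped:

```python
# Toy demo of CONTEXT desynchronization: once one compressed header is
# lost, every later header decoded against the stale receiver CONTEXT
# is wrong.  A single integer "seq" stands in for the whole header.
def run(link_drops):
    tx_ctx = rx_ctx = 100          # both CONTEXTs initialized by the
                                   # first uncompressed header (seq 100)
    decoded = []
    for i, seq in enumerate(range(101, 106)):
        delta = seq - tx_ctx       # compress against transmitter CONTEXT
        tx_ctx = seq               # transmitter updates its CONTEXT
        if i in link_drops:
            continue               # compressed header lost on the link
        rx_ctx = rx_ctx + delta    # decompress, update receiver CONTEXT
        decoded.append(rx_ctx)
    return decoded
```

With no losses, `run(set())` recovers the true sequence 101..105; after `run({1})` drops one header, every later header comes out off by one, which is exactly the contamination the coding schemes below are designed to stop.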
FEC Coding of Compressed Headers
Two features of the header compression problem give rise to bursts of errors. First, each packet header, even when compressed, consists of multiple bytes, so the loss of a single compressed header wipes out 2-5 consecutive bytes. Second, many channels of interest, e.g., the fading wireless channel, are themselves bursty and produce consecutive losses.
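A two-state Gilbert-Elliott model is one common way to generate the kind of bursty loss pattern described above; the sketch below uses it purely as an illustrative channel model (the transition and loss probabilities are arbitrary choices, not measurements):

```python
import random

def gilbert_elliott(n, p_gb=0.05, p_bg=0.3, loss_bad=0.8, seed=1):
    """Return a loss pattern of n packets: True = packet lost.

    Two channel states: 'good' (no losses) and 'bad' (losses with
    probability loss_bad).  Parameters are illustrative only.
    """
    rng = random.Random(seed)
    bad = False
    losses = []
    for _ in range(n):
        if bad:
            if rng.random() < p_bg:
                bad = False            # bad -> good transition
        else:
            if rng.random() < p_gb:
                bad = True             # good -> bad transition
        losses.append(bad and rng.random() < loss_bad)
    return losses
```

Because losses only occur while the channel sits in the bad state, they arrive clustered in bursts rather than independently, which is the regime the RS and interleaved convolutional schemes below target.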
Motivated by the bursty nature of the losses and by the excellent
burst-error correcting capability of Reed-Solomon (RS) codes, a
technique is proposed to mitigate error propagation on uni-directional
links. In a similar vein, a reduced-complexity solution using an
interleaved convolutional code is proposed. For the feedback case, a new
predictive hybrid ARQ technique with convolutional codes is developed.
Case I: Uni-directional Links
As shown in Figure 2, the packet headers (one uncompressed and several compressed headers) are bundled
together and presented to the RS encoder. The resulting parity symbols are distributed equally among
the compressed headers (Figure 3). This ensures that decoding is not severely affected even if some
packets are lost.
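The bundling and parity-distribution structure of Figures 2-3 can be sketched as follows. To keep the example self-contained, a byte-wise XOR over the bundle stands in for the RS parity symbols; a real system would use an RS(n, k) encoder over GF(256), but the way parity is split evenly across the compressed headers is the same.

```python
# Structural sketch of Figures 2-3: bundle headers, compute parity,
# spread the parity bytes evenly over the compressed headers.
# Byte-wise XOR is a stand-in for real RS parity.
from itertools import zip_longest

def parity(headers):
    """One parity byte per byte-position across the bundled headers."""
    out = []
    for column in zip_longest(*headers, fillvalue=0):
        x = 0
        for b in column:
            x ^= b
        out.append(x)
    return bytes(out)

def attach_parity(headers):
    """Return (header, parity_chunk) pairs with parity split evenly."""
    p = parity(headers)
    n = len(headers)
    chunk = -(-len(p) // n)                      # ceiling division
    return [(h, p[i * chunk:(i + 1) * chunk]) for i, h in enumerate(headers)]
```

Spreading the parity across packets (rather than sending it in one packet) is what gives the resilience: losing a few packets removes only a few parity symbols along with the data symbols, so the decoder can still recover the bundle.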
Figure 4 shows the performance of the proposed RS-based scheme versus the uncoded scheme over
an i.i.d. channel. Note that the packet discard rate is the one perceived at the application layer;
packets lost or dropped at lower layers never reach the upper layers. As expected, decreasing the
code rate gives better performance, and overall the coded scheme outperforms the uncoded one.
Convolutional Codes & Delay-limited Interleaving
Convolutional codes offer an interesting alternative for FEC coding. They are less complex to encode
and decode than block codes such as RS codes: decoding RS codes is at least quadratic in the codeword
length, whereas decoding convolutional codes via the Viterbi algorithm is linear. Convolutional codes
perform well when errors are random. To improve their performance under bursty losses, interleaving
must be employed. As shown in Figure 5, we employ two interleavers, one for the data bits before
encoding and one for the parity bits. This randomizes burst errors within the codeword, yielding
better performance. The decoding operation is shown in Figure 6.
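A simple block interleaver illustrates how bursts get randomized before the Viterbi decoder sees them. Bits are written into a rows-by-cols array row by row and read out column by column, so a burst of up to `rows` consecutive channel errors lands on positions spaced `cols` apart in the codeword (the specific dimensions are illustrative, not the ones used in the paper):

```python
# (rows x cols) block interleaver: write row-wise, read column-wise.
def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

For a 3x4 interleaver, the first three transmitted symbols are the original positions 0, 4, and 8; a burst hitting all three therefore damages three well-separated codeword positions, which a convolutional code handles far better than three adjacent ones.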
Interleaving always incurs delay. For delay-sensitive applications we therefore design a
delay-limited interleaver. As shown in Figure 7, the proposed convolutional-code-based system
performs well compared to ECRTP and ROHC in the uni-directional mode.
Case II: Bi-directional Links
The ARQ mechanism in header compression differs from existing ARQ
techniques: there are multiple retransmissions within a single codeword.
To exploit this, we propose a new predictive hybrid ARQ technique based
on convolutional codes. The block diagram of the system is shown in
Figure 8, and the encoder and decoder blocks are shown in Figures 5
and 6, respectively. Each time a packet is lost, the decoder transmits a
NACK to the encoder. Upon receiving a NACK, the transmitter mimics the
decoding operation and determines whether the packet loss would result
in a decoding failure; if so, it retransmits the packet. Because of the
non-zero round-trip time, a buffer is provided at the encoder to hold
packets for potential retransmission.
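The key idea, the encoder mirroring the decoder's state so it retransmits only when a loss is actually fatal, can be sketched as below. The failure rule "decoding fails when more than a fixed number of packets per codeword are missing" is a deliberate simplification standing in for a real Viterbi decoder, and the class name, codeword length, and correctable-loss count are all assumptions of this sketch:

```python
CODEWORD_LEN = 8     # packets per convolutional codeword (illustrative)
CORRECTABLE = 2      # losses the code can absorb per codeword (illustrative)

class PredictiveARQ:
    """Encoder side of a predictive hybrid ARQ (simplified sketch)."""

    def __init__(self):
        self.buffer = {}            # seq -> packet, kept for retransmission
        self.lost_in_codeword = {}  # codeword id -> set of lost seqs

    def send(self, seq, packet):
        self.buffer[seq] = packet   # buffer covers the round-trip time

    def on_nack(self, seq):
        """Mimic the decoder: retransmit only if this loss is fatal."""
        cw = seq // CODEWORD_LEN
        lost = self.lost_in_codeword.setdefault(cw, set())
        lost.add(seq)
        if len(lost) > CORRECTABLE:     # predicted decoding failure
            return self.buffer[seq]     # retransmit
        return None                     # code will recover; stay silent
```

Staying silent while the code can still recover is where the throughput gain over plain ARQ comes from: only losses that would actually break the codeword cost a retransmission.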
We compare the performance of the proposed scheme with ROHC-O, using a
rate-1/2 Recursive Systematic Convolutional encoder with constraint
length K = 3. To test the proposed scheme under various delay
constraints, interleavers with delays of 100, 200, 400 and 900 bits were
employed. Figures 9 and 10 show that the predictive hybrid ARQ method
performs favorably compared to ROHC-O under all channel conditions. Note
also that the delay profile of the proposed scheme remains consistent
even as the channel deteriorates, in contrast to ROHC-O.
The discussion here pertains to non-TCP packet header compression. The IETF ROHC group is working toward a robust header compression solution for the TCP protocol; this work is currently underway and expected to materialize in early 2006. Future work includes evaluating the performance of coded TCP header compression against the final ROHC-TCP draft. Another extension of this work is to apply more advanced error correcting codes such as Turbo, LDPC, and Raptor codes.
Acknowledgements: This work was supported in part by grant 0627106 from the National Science Foundation.
Last modified 2008