Channel Coding

Reliable communication over a noisy channel is achievable when the transmission rate remains below the channel capacity, a fundamental limit defining the maximum rate for error-free data transfer.

This reliability is facilitated by channel coding, a process that assigns messages to specific blocks of channel inputs, selectively using only a subset of all possible blocks to enhance error resilience.

Specific mappings between messages and channel input sequences have not been explored in detail here, as the focus is on theoretical bounds rather than practical implementations.

Channel capacity \(C\) and channel cutoff rate \(R_0\) are evaluated using random coding, a method that avoids specifying an optimal mapping.

Instead, random coding averages the error probability over all possible mappings and shows that, when the transmission rate is below \(C\), this ensemble-average error probability approaches zero as the block length increases.

This implies the existence of at least one mapping where the error probability diminishes with longer blocks, providing a theoretical basis for practical code design.
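To make the random-coding argument concrete, here is a minimal Monte Carlo sketch (an illustration added for concreteness, not part of the original development): it draws random codebooks for a binary symmetric channel, decodes by minimum Hamming distance, and estimates the codebook-averaged block error probability at a fixed rate below capacity, which falls as the block length grows. All parameter values are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_error_prob(n, rate, p, n_books=50, n_trials=40):
    """Estimate the ensemble-average block error probability of random
    codes of length n and rate `rate` on a BSC with crossover probability p,
    using minimum-Hamming-distance (ML) decoding."""
    M = 2 ** max(1, int(rate * n))               # number of messages
    errors = 0
    for _ in range(n_books):
        book = rng.integers(0, 2, size=(M, n))   # one random codebook
        for _ in range(n_trials):
            m = rng.integers(M)                  # random message index
            flips = (rng.random(n) < p).astype(int)
            y = book[m] ^ flips                  # BSC output
            dist = (book ^ y).sum(axis=1)        # Hamming distances
            if dist.argmin() != m:               # nearest-codeword decoding
                errors += 1
    return errors / (n_books * n_trials)

# Capacity of a BSC with p = 0.05 is about 0.71 bits/use; pick rate 0.3 < C.
for n in (8, 16, 24, 32):
    print(n, avg_error_prob(n, rate=0.3, p=0.05))
```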

Block Codes

Channel codes are classified into two primary types: block codes and convolutional codes, each with distinct encoding strategies.

In block codes, one of \(M = 2^k\) messages—each a binary sequence of length \(k\) called the information sequence—is mapped to a binary sequence of length \(n\), known as the codeword, where \(n \geq k\) to allow redundancy for error correction.

Transmission occurs by sending \(n\) binary symbols over the channel, often using modulation like BPSK (Binary Phase Shift Keying) to represent bits as signal phases.

Block coding schemes are memoryless, meaning the encoding of each codeword is independent of prior transmissions.

After encoding and sending a codeword, the system processes a new set of \(k\) information bits, producing a codeword based solely on the current input, unaffected by previous codewords, which simplifies implementation but limits error correction across blocks.
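As a concrete instance of such a mapping, the sketch below encodes \(k = 4\) information bits into an \(n = 7\) codeword of the Hamming (7,4) code; the choice of code is illustrative, since the section itself fixes no particular mapping. Each block is encoded from the current input alone, reflecting the memoryless property.

```python
import numpy as np

# Systematic generator matrix of the Hamming (7,4) code: G = [I_4 | P].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(info_bits):
    """Map a length-4 information sequence to a length-7 codeword;
    the output depends only on the current block (memoryless)."""
    return np.array(info_bits) @ G % 2

print(encode([1, 0, 1, 1]))   # one codeword per 4-bit block, R_c = 4/7
```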

Code Rate

The code rate \(R_c\) of a block or convolutional code is defined as:

\[ R_c = \frac{k}{n} \]

This ratio quantifies the efficiency of information transfer, measured in information bits per transmitted code symbol; whenever redundancy is added, \(n > k\) and hence \(R_c < 1\).

For a codeword of length \(n\) transmitted via an \(N\)-dimensional constellation of size \(M\) (a power of 2), \(L = \frac{n}{\log_2 M}\) symbols are sent per codeword, with \(n\) chosen so that \(L\) is an integer.

With symbol duration \(T_s\), the transmission time for \(k\) bits is \(T = LT_s\), and the rate becomes:

\[\begin{split} \begin{align} R &= \frac{k}{LT_s} = \frac{k}{n} \times \frac{\log_2 M}{T_s} \\ &= R_c \frac{\log_2 M}{T_s} \quad \text{bits/s} \end{align} \end{split}\]

This equation ties \(R_c\) to modulation complexity (\(\log_2 M\)) and symbol rate (\(1/T_s\)), showing how coding impacts overall data throughput.
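As a worked example with assumed numbers (a rate-1/2 code, QPSK, and a 1 μs symbol duration), the formula gives \(R = 0.5 \times 2 / 10^{-6} = 1\) Mbit/s:

```python
from math import log2

def bit_rate(R_c, M, T_s):
    """Information bit rate R = R_c * log2(M) / T_s  (bits/s)."""
    return R_c * log2(M) / T_s

print(bit_rate(R_c=0.5, M=4, T_s=1e-6))   # 1.0e6 bits/s
```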

Spectral Bit Rate

The encoded and modulated signals span a space of dimension \(LN\).

Per the dimensionality theorem, signals of duration \(LT_s\) spanning \(LN\) dimensions require a minimum transmission bandwidth of:

\[ W = \frac{LN}{2LT_s} = \frac{N}{2T_s} = \frac{RN}{2R_c \log_2 M} \quad \text{Hz} \]

The spectral bit rate, or bandwidth efficiency, is:

\[ r = \frac{R}{W} = \frac{2 \log_2 M}{N} R_c \quad \text{bits/s/Hz} \]

Here, \(N\) is the signal space dimension per symbol, and \(\log_2 M\) is bits per symbol.

Compared to an uncoded system with identical modulation, coding at a fixed symbol rate reduces the information bit rate by a factor of \(R_c\); equivalently, maintaining the same bit rate requires a bandwidth larger by a factor of \(\frac{1}{R_c}\), since more symbols are needed per information bit.

This trade-off enhances reliability at the cost of spectral efficiency.
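Continuing the same assumed numbers (rate-1/2 QPSK at \(R = 1\) Mbit/s), a short sketch evaluates both formulas and confirms \(r = R/W\):

```python
from math import log2, isclose

def min_bandwidth(R, N, R_c, M):
    """Minimum transmission bandwidth W = R*N / (2*R_c*log2(M))  (Hz)."""
    return R * N / (2 * R_c * log2(M))

def spectral_bit_rate(R_c, M, N):
    """Bandwidth efficiency r = (2*log2(M)/N) * R_c  (bits/s/Hz)."""
    return 2 * log2(M) * R_c / N

R = 1e6
W = min_bandwidth(R, N=2, R_c=0.5, M=4)          # QPSK: N = 2
print(W, spectral_bit_rate(R_c=0.5, M=4, N=2))   # 1 MHz, 1 bit/s/Hz
assert isclose(R / W, spectral_bit_rate(R_c=0.5, M=4, N=2))
```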

Energy and Transmitted Power

Given an average constellation energy \(\mathcal{E}_{\text{avg}}\), the energy per codeword is:

\[ \mathcal{E} = L \mathcal{E}_{\text{avg}} = \frac{n}{\log_2 M} \mathcal{E}_{\text{avg}} \]

This accounts for \(L\) symbols per codeword, each carrying \(\mathcal{E}_{\text{avg}}\).

The energy per codeword component is:

\[ \mathcal{E}_c = \frac{\mathcal{E}}{n} = \frac{\mathcal{E}_{\text{avg}}}{\log_2 M} \]

This distributes energy across \(n\) components.

The energy per information bit is:

\[ \mathcal{E}_b = \frac{\mathcal{E}}{k} = \frac{\mathcal{E}_{\text{avg}}}{R_c \log_2 M} \]

Since \(k < n\), \(\mathcal{E}_b > \mathcal{E}_c\), reflecting redundancy’s energy cost per bit.

Combining these:

\[ \mathcal{E}_c = R_c \mathcal{E}_b \]

This shows \(\mathcal{E}_c\) scales with \(R_c\), linking code efficiency to energy allocation.
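A quick numeric check with assumed values (QPSK, a rate-1/2 code with \(k = 7\) and \(n = 14\), and a constellation energy of 2) confirms the three energy expressions and the identity \(\mathcal{E}_c = R_c \mathcal{E}_b\):

```python
from math import log2, isclose

E_avg, M, n, k = 2.0, 4, 14, 7      # assumed values; R_c = 1/2, QPSK
R_c = k / n
L   = n / log2(M)                   # symbols per codeword
E   = L * E_avg                     # energy per codeword
E_c = E / n                         # energy per codeword component
E_b = E / k                         # energy per information bit
print(E, E_c, E_b)                  # 14.0, 1.0, 2.0
assert isclose(E_c, R_c * E_b)      # E_c = R_c * E_b
```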

Transmitted Power

Transmitted power is:

\[ P = \frac{\mathcal{E}}{LT_s} = \frac{\mathcal{E}_{\text{avg}}}{T_s} = R \frac{\mathcal{E}_{\text{avg}}}{R_c \log_2 M} = R \mathcal{E}_b \]

Power depends on bit rate \(R\) and energy per bit \(\mathcal{E}_b\), adjusted by coding and modulation.
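With the same assumed values plus a symbol duration, the power can be computed two ways, as codeword energy over codeword duration or as \(R \mathcal{E}_b\), and the results agree:

```python
from math import log2, isclose

E_avg, M, R_c, T_s = 2.0, 4, 0.5, 1e-6   # assumed values
R   = R_c * log2(M) / T_s                # bit rate (bits/s)
E_b = E_avg / (R_c * log2(M))            # energy per information bit
P1  = E_avg / T_s                        # P = E / (L*T_s) = E_avg / T_s
P2  = R * E_b                            # P = R * E_b
assert isclose(P1, P2)
print(P1)                                # 2.0e6 energy units per second
```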

For the common schemes of BPSK, BFSK, and QPSK, the general formulas give:

  • BPSK: \(W = \frac{R}{2R_c}, r = 2R_c\) (1 bit/symbol, \(N=1\))

  • BFSK: \(W = \frac{R}{R_c}, r = R_c\) (1 bit/symbol, \(N=2\) for orthogonal frequencies)

  • QPSK: \(W = \frac{R}{2R_c}, r = 2R_c\) (2 bits/symbol, \(N=2\))

BPSK and QPSK achieve the same efficiency because QPSK's extra bit per symbol is offset by its second signal dimension, while orthogonal BFSK halves the efficiency by spending two dimensions on every bit; in all cases coding scales the spectral bit rate by \(R_c\).
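As a final check (with an assumed bit rate and code rate), the bullet expressions follow directly from the general bandwidth and efficiency formulas:

```python
from math import log2

R, R_c = 1e6, 0.5                        # assumed values

# (name, M, N): BPSK is one-dimensional, orthogonal BFSK needs two
# dimensions, and QPSK carries 2 bits in two dimensions.
for name, M, N in [("BPSK", 2, 1), ("BFSK", 2, 2), ("QPSK", 4, 2)]:
    W = R * N / (2 * R_c * log2(M))      # minimum bandwidth (Hz)
    r = 2 * log2(M) * R_c / N            # spectral bit rate (bits/s/Hz)
    print(f"{name}: W = {W:.0f} Hz, r = {r} bits/s/Hz")
```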