## Soft Decision Decoding

For linear binary block codes on an AWGN channel, soft decision decoding minimizes the error probability by operating on unquantized receiver outputs. With coherent PSK or orthogonal FSK (coherent or noncoherent), an optimal receiver employs \(M = 2^k\) matched filters, one per codeword waveform, and selects the codeword whose filter produces the largest output. Alternatively, a single matched filter per bit followed by \(M\) cross-correlators computes the same decision variables, offering equivalent performance with different implementation complexity.

### Soft Decision Signal Model

For binary coherent PSK, the \(j\)-th matched filter output for a received codeword is

\[
r_j = \begin{cases} +\sqrt{E_c} + n_j, & \text{if the } j\text{-th code bit is } 1,\\ -\sqrt{E_c} + n_j, & \text{if the } j\text{-th code bit is } 0, \end{cases}
\]

where \(E_c\) is the energy per coded bit and \(n_j\) is zero-mean Gaussian noise with variance \(N_0/2\).
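As a quick illustration, here is a minimal NumPy sketch of this signal model, generating the matched filter outputs \(r_j\) for one BPSK-modulated codeword over AWGN; the codeword, \(E_c\), and \(N_0\) values are arbitrary placeholders, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

Ec = 1.0   # energy per coded bit (illustrative)
N0 = 0.5   # noise spectral density (illustrative)
c = np.array([1, 0, 1, 1, 0, 1, 0])  # an example length-7 codeword

# BPSK mapping: bit 1 -> +sqrt(Ec), bit 0 -> -sqrt(Ec)
s = (2 * c - 1) * np.sqrt(Ec)

# AWGN samples n_j: zero mean, variance N0/2
n = rng.normal(0.0, np.sqrt(N0 / 2), size=c.size)

r = s + n  # unquantized matched filter outputs r_j
print(r)
```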

### Correlation Metrics

The decoder computes \(M\) correlation metrics:

\[ CM_m = C(\vec{r}, \vec{c}_m) = \sum_{j=1}^{n} (2c_{mj} - 1)r_j, \quad m = 1, 2, \ldots, M \]

Here, \(2c_{mj} - 1\) maps bit 1 to \(+1\) and bit 0 to \(-1\), so each term of the sum is aligned with the sign of the corresponding signal component. The metric of the transmitted codeword has mean \(n\sqrt{E_c}\), while the metric of a codeword at Hamming distance \(d\) from it has mean \((n - 2d)\sqrt{E_c}\); selecting the codeword with the largest metric is therefore the optimal (maximum-likelihood) decision.
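The sketch below, a minimal NumPy illustration rather than an optimized decoder, computes all \(M = 2^k\) correlation metrics at once and selects the maximizing codeword. The generator matrix is one systematic generator of the (7,4) Hamming code, and the soft outputs `r` are made-up example values.

```python
import numpy as np

def soft_decision_decode(r, codewords):
    """Pick the codeword maximizing CM_m = sum_j (2*c_mj - 1) * r_j."""
    bipolar = 2 * codewords - 1   # map {0,1} -> {-1,+1}
    metrics = bipolar @ r         # all M correlation metrics at once
    return codewords[np.argmax(metrics)], metrics

# Enumerate all 2^k codewords of a small linear code from a generator matrix.
# G is one systematic generator of the (7,4) Hamming code (illustrative).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
k = G.shape[0]
messages = (np.arange(2**k)[:, None] >> np.arange(k)) & 1
codewords = messages @ G % 2

r = np.array([0.9, -1.1, 0.8, 1.2, -0.7, 1.0, -0.9])  # example soft outputs
c_hat, metrics = soft_decision_decode(r, codewords)
print(c_hat)
```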

### Block and Bit Error Probability in Soft Decision Decoding (SDD)

The block error probability \(P_e\) for soft decision decoding (SDD) can be bounded using the general bounds derived earlier, adapted to the specific modulation. For BPSK, the parameter \(\Delta\) defined earlier becomes \(\Delta = e^{-E_c/N_0}\), where \(E_c = R_c E_b\) relates the energy per coded bit to the energy per information bit through the code rate \(R_c\). Substituting into the weight enumerating polynomial \(A(Z)\) gives the bound

\[
P_e \leq \left( A(Z) - 1 \right) \Big|_{Z = e^{-R_c E_b / N_0}}.
\]

This uses the code's weight distribution to bound the block error probability under AWGN with BPSK modulation.
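As a numeric illustration, the helper below evaluates this bound for any weight distribution; the (7,4) Hamming code, whose weight enumerator is \(A(Z) = 1 + 7Z^3 + 7Z^4 + Z^7\), serves as the example, and the \(E_b/N_0\) value is arbitrary.

```python
import numpy as np

def sdd_block_error_bound(EbN0_dB, Rc, weight_counts):
    """Union bound P_e <= (A(Z) - 1) evaluated at Z = exp(-Rc*Eb/N0).

    weight_counts: dict {hamming_weight d: multiplicity A_d}, excluding d = 0,
    so the sum directly equals A(Z) - 1.
    """
    gamma_b = 10 ** (EbN0_dB / 10)
    Z = np.exp(-Rc * gamma_b)
    return sum(A_d * Z**d for d, A_d in weight_counts.items())

# (7,4) Hamming code: A(Z) = 1 + 7Z^3 + 7Z^4 + Z^7
bound = sdd_block_error_bound(EbN0_dB=6.0, Rc=4/7,
                              weight_counts={3: 7, 4: 7, 7: 1})
print(f"P_e <= {bound:.3e}")
```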

### Simplified Bounds and Bit Error Probability

A simpler bound on \(P_e\) for SDD is

\[
P_e \leq (2^k - 1)\, e^{-R_c d_{\min} E_b / N_0}.
\]

Using \(2^k - 1 < 2^k = e^{k \ln 2}\), this becomes

\[
P_e \leq e^{-\gamma_b \left( R_c d_{\min} - k \ln 2 / \gamma_b \right)},
\]

where \(\gamma_b = E_b / N_0\) is the SNR per bit. This form makes explicit the exponential decay of the error probability with SNR and the code parameters. The bit error probability \(P_b\) for BPSK is bounded as

\[
P_b \leq \frac{1}{k} \left. \frac{\partial}{\partial Y} B(Y, Z) \right|_{Y=1,\, Z = e^{-R_c E_b / N_0}},
\]

which uses the IOWEF \(B(Y, Z)\) to average the number of bit errors over all error events; such bounds are computable with tools like MATLAB's bercoding.
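A quick numeric check of the simplified bound, again using the (7,4) Hamming parameters (\(R_c = 4/7\), \(d_{\min} = 3\), \(k = 4\)) purely as an example:

```python
import numpy as np

def sdd_simple_bound(EbN0_dB, Rc, dmin, k):
    """P_e <= exp(-gamma_b * (Rc*dmin - k*ln(2)/gamma_b))."""
    gamma_b = 10 ** (EbN0_dB / 10)
    return np.exp(-gamma_b * (Rc * dmin - k * np.log(2) / gamma_b))

for EbN0_dB in (4, 6, 8, 10):
    print(EbN0_dB, "dB:", sdd_simple_bound(EbN0_dB, Rc=4/7, dmin=3, k=4))
```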

### Coding Gain

Comparing the SDD bound \(e^{-\gamma_b (R_c d_{\min} - k \ln 2 / \gamma_b)}\) with the bound \(\frac{1}{2}e^{-\gamma_b}\) for uncoded BPSK, coding offers a gain of approximately \(10 \log_{10} (R_c d_{\min} - k \ln 2 / \gamma_b)\) dB, termed the coding gain. This gain depends on \(R_c\), \(d_{\min}\), \(k\), and \(\gamma_b\), and quantifies the performance improvement due to coding. As \(\gamma_b \to \infty\), the term \(k \ln 2 / \gamma_b\) vanishes and the gain approaches the asymptotic coding gain \(10 \log_{10}(R_c d_{\min})\) dB, the maximum achievable benefit of coding as noise diminishes.
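The sketch below evaluates the coding gain as a function of \(\gamma_b\) and compares it with the asymptotic value; the parameters are again the illustrative (7,4) Hamming values. Note that the gain from this bound can even be negative at low SNR, where the \(k \ln 2 / \gamma_b\) term dominates.

```python
import numpy as np

def coding_gain_dB(EbN0_dB, Rc, dmin, k):
    """Coding gain 10*log10(Rc*dmin - k*ln(2)/gamma_b) in dB."""
    gamma_b = 10 ** (EbN0_dB / 10)
    return 10 * np.log10(Rc * dmin - k * np.log(2) / gamma_b)

Rc, dmin, k = 4/7, 3, 4   # (7,4) Hamming code (illustrative)
for EbN0_dB in (4, 8, 12, 16):
    print(EbN0_dB, "dB:", round(coding_gain_dB(EbN0_dB, Rc, dmin, k), 2), "dB gain")

# Asymptotic coding gain as gamma_b -> infinity
print("asymptotic:", round(10 * np.log10(Rc * dmin), 2), "dB")
```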