Optimal Receivers#

Waveform and Vector Channel Models#

In digital communication systems, the additive white Gaussian noise (AWGN) channel is a fundamental model that characterizes the effect of noise on the transmitted signal. In this chapter, we describe the AWGN channel model in both its continuous-time (waveform) form and its equivalent finite-dimensional vector form.

AWGN Channel: Waveform Model#

Consider a communication system in which a transmitter sends one of \( M \) possible signals through a channel that adds noise. The channel output is given by

\[ r(t) = s_m(t) + n(t) \]

where:

  • Transmitted Signal \( s_m(t) \):
    The signal \( s_m(t) \) is one of the \( M \) possible waveforms:

    \[ s_m(t) \in \{ s_1(t), s_2(t), \ldots, s_M(t) \} \]

    Each signal is assumed to have finite energy,

    \[ E_m = \int_{-\infty}^{\infty} s_m^2(t)\,dt < \infty \]

    and is typically designed to satisfy certain criteria (e.g., orthogonality or energy constraints) for efficient transmission.

  • Noise \( n(t) \):
    The noise process \( n(t) \) is modeled as a zero-mean white Gaussian noise process with the following properties:

    \[ \mathbb{E}[n(t)] = 0, \quad \text{and} \quad \mathbb{E}[n(t)n(\tau)] = \frac{N_0}{2}\delta(t-\tau) \]

    Here, \( \delta(t-\tau) \) is the Dirac delta function, and \( \frac{N_0}{2} \) represents the noise power spectral density (PSD), indicating that the noise is spectrally flat (white) across all frequencies.

  • Received Signal \( r(t) \):
    The waveform \( r(t) \) is the observed signal at the receiver, which is corrupted by the additive noise \( n(t) \).
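The waveform model above can be sketched numerically by sampling. In the following Python sketch, the sampling rate, duration, pulse shape, and noise level are illustrative assumptions (not specified in the text); sampled white noise of PSD \(N_0/2\) is approximated by giving each sample variance \(N_0/(2\,\Delta t)\):

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                 # sampling rate in samples/second (assumed)
T = 1.0                   # signal duration in seconds (assumed)
dt = 1 / fs
t = np.arange(0, T, dt)

# Example transmitted waveform s_m(t): a unit-energy rectangular pulse
s_m = np.ones_like(t) / np.sqrt(T)

N0 = 0.1                  # noise PSD parameter (assumed)
# Sampled white Gaussian noise: variance N0/(2*dt) per sample, so that the
# discrete autocorrelation approximates (N0/2) * delta(t - tau)
n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.shape)

r = s_m + n               # received waveform samples: r(t) = s_m(t) + n(t)
```

Note that the per-sample noise variance grows as the sampling interval shrinks; this is the discrete-time counterpart of the delta-function autocorrelation of white noise.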

Receiver Decision Rule#

Upon receiving \( r(t) \), the receiver’s task is to decide which signal \( s_m(t) \) was transmitted using a certain decision rule.

Example: Maximum Likelihood Decision Rule#

Assuming that the \( M \) messages are equally likely, the optimal receiver employs the maximum likelihood (ML) decision rule. The likelihood function for each hypothesis \( m \) is proportional to

\[ p\bigl(r(t) \mid s_m(t)\bigr) \propto \exp\left\{-\frac{1}{N_0}\int_{-\infty}^{\infty} \left[r(t) - s_m(t)\right]^2 dt \right\} \]

Maximizing this likelihood is equivalent to minimizing the Euclidean distance between the received waveform and the candidate signals. Thus, the decision rule, in terms of the continuous-time waveforms, can be written as:

\[ \hat{m} = \arg \min_{1 \le m \le M} \int_{-\infty}^{\infty} \left[r(t) - s_m(t)\right]^2 dt \]
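This rule can be sketched numerically by replacing the integral with a Riemann sum over waveform samples. The antipodal two-signal set, sampling rate, and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

fs, T = 1000, 1.0
dt = 1 / fs
t = np.arange(0, T, dt)

# Two candidate unit-energy waveforms (M = 2, antipodal signaling, assumed)
signals = [np.ones_like(t) / np.sqrt(T), -np.ones_like(t) / np.sqrt(T)]

N0 = 0.01                 # low noise level (assumed)
m_true = 0
n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.shape)
r = signals[m_true] + n

# hat{m} = argmin_m  integral (r(t) - s_m(t))^2 dt, discretized as a sum
distances = [np.sum((r - s) ** 2) * dt for s in signals]
m_hat = int(np.argmin(distances))
```

At this signal-to-noise ratio the minimum-distance decision recovers the transmitted index with overwhelming probability.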

Vector Channel Model#

To facilitate analysis and implementation, it is often advantageous to represent the continuous-time signals in a finite-dimensional Euclidean space. This is accomplished by projecting the signals onto an orthonormal basis \(\{\phi_k(t)\}_{k=1}^N\).

Vectorization#

The projection of \( r(t) \), \( s_m(t) \), and \( n(t) \) onto a set of orthonormal basis functions is commonly referred to as vectorization of the continuous waveforms. This process transforms continuous-time signals into finite-dimensional vectors.

Projection onto Basis Functions:

Given an orthonormal set of basis functions \(\{\phi_k(t)\}_{k=1}^{N}\), any finite-energy signal \( x(t) \) can be expressed as a linear combination of these basis functions:

\[ x(t) = \sum_{k=1}^{N} x_k \phi_k(t) \]

where the coefficients \( x_k \) are obtained by projecting \( x(t) \) onto the basis functions:

\[ x_k = \int_{-\infty}^{\infty} x(t)\phi_k(t)\,dt, \quad k = 1, 2, \ldots, N \]

Vector Representation:

Once the projection is complete, the signal \( x(t) \) is represented by the vector:

\[\begin{split} \vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix} \end{split}\]

This procedure is applied similarly to the transmitted signal \( s_m(t) \), the noise \( n(t) \), and the received signal \( r(t) \).
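The projection step above can be sketched numerically. The sinusoidal basis pair and the test signal below are illustrative choices; the signal is constructed inside the span of the basis so the recovered coefficients can be checked exactly:

```python
import numpy as np

fs, T = 1000, 1.0
dt = 1 / fs
t = np.arange(0, T, dt)

# An orthonormal pair of basis functions on [0, T) (assumed choice):
phi1 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)

# A signal lying in the span of {phi1, phi2}, with known coefficients
x = 3.0 * phi1 - 1.5 * phi2

# x_k = integral x(t) phi_k(t) dt, approximated by a Riemann sum
x1 = np.sum(x * phi1) * dt
x2 = np.sum(x * phi2) * dt
# x1 recovers 3.0 and x2 recovers -1.5
```

Because the harmonics are sampled uniformly over a full period, the discrete sums reproduce the orthonormality relations essentially exactly, so the coefficients are recovered to machine precision.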

Vectorized Communications Model#

For each signal, define the projection coefficients as follows:

\[ r_k = \int_{-\infty}^{\infty} r(t)\phi_k(t)\,dt, \quad s_{m,k} = \int_{-\infty}^{\infty} s_m(t)\phi_k(t)\,dt, \quad n_k = \int_{-\infty}^{\infty} n(t)\phi_k(t)\,dt \]

Thus, the waveform model \( r(t) = s_m(t) + n(t) \) can be equivalently expressed in vector form as:

\[ \vec{r} = \vec{s}_m + \vec{n} \]

where

\[\begin{split} \vec{r} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{bmatrix}, \quad \vec{s}_m = \begin{bmatrix} s_{m,1} \\ s_{m,2} \\ \vdots \\ s_{m,N} \end{bmatrix}, \quad \vec{n} = \begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_N \end{bmatrix} \end{split}\]
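Once in vector form, simulating one channel use no longer requires waveforms at all. The QPSK-like constellation (\(N = 2\), \(M = 4\)) and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unit-energy constellation points in R^2 (assumed, QPSK-like)
constellation = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2)

N0 = 0.1                  # noise PSD parameter (assumed)
m = 2                     # index of the transmitted vector
s_m = constellation[m]

# Noise vector with iid N(0, N0/2) components, per the vector model
n = rng.normal(0.0, np.sqrt(N0 / 2), size=s_m.shape)

r = s_m + n               # vector channel: r = s_m + n
```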

Expansion of the Noise Process#

The noise process \(n(t)\) is modeled as a zero-mean white Gaussian process with power spectral density \(N_0/2\). A fundamental property of white Gaussian noise is that its projection onto any orthonormal basis yields independent and identically distributed (iid) Gaussian random variables. Specifically, when \(n(t)\) is expanded in the same basis,

\[ n(t) = \sum_{j=1}^N n_j \phi_j(t) \]

the coefficients

\[ n_j = \int_{-\infty}^{\infty} n(t) \phi_j(t) \, dt, \quad 1 \leq j \leq N \]

are iid with

\[ n_j \sim \mathcal{N}\left(0, \frac{N_0}{2}\right) \]

Thus, the noise process can be represented by the vector

\[ \vec{n} = [n_1, n_2, \dots, n_N]^T \]

whose components \( \{n_k\} \), obtained by projecting the white Gaussian process \( n(t) \), are independent Gaussian random variables with:

\[ \boxed{ n_k \sim \mathcal{N}\left(0, \frac{N_0}{2}\right), \quad k = 1,2,\ldots,N } \]
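This property can be checked empirically by projecting many realizations of sampled white noise onto an orthonormal basis and inspecting the statistics of the coefficients. The basis pair, sampling rate, and \(N_0\) below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

fs, T, N0 = 1000, 1.0, 0.2        # assumed parameters
dt = 1 / fs
t = np.arange(0, T, dt)

# Orthonormal basis pair on [0, T) (assumed choice)
phi1 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)

trials = 20000
# Sampled white noise: variance N0/(2*dt) per sample approximates PSD N0/2
noise = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))

n1 = noise @ phi1 * dt            # coefficient n_1, one value per trial
n2 = noise @ phi2 * dt            # coefficient n_2, one value per trial

# Sample variances should be close to N0/2 = 0.1,
# and n1, n2 should be nearly uncorrelated across trials
```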

Vector Form of the Channel#

The original waveform channel model is given by

\[ r(t) = s_m(t) + n(t) \]

By projecting \(r(t)\) onto the same set of basis functions, we obtain the vectorized representation of the received signal:

\[ \boxed{ \vec{r} = \vec{s}_m + \vec{n} } \]

where \(\vec{r}\), \(\vec{s}_m\), and \(\vec{n}\) are \(N\)-dimensional vectors. In this formulation, the noise vector \(\vec{n}\) has components that are iid zero-mean Gaussian random variables with variance \(N_0/2\).

Optimal Detection and Decision Rule#

The primary objective at the receiver is to make an optimal decision regarding which message was transmitted. An optimal decision is defined in terms of minimizing the probability of error. That is, the receiver employs a decision rule that minimizes the probability that the detected message, \(\hat{m}\), differs from the transmitted message, \(m\).

Mathematically, the probability of error is given by

\[ \boxed{ P_e = \Pr[\hat{m} \neq m] } \]

The design of the decision rule is thus focused on minimizing \(P_e\).

Recall that, considering modulation alone, the message \( m \) represents the specific piece of information (e.g., a sequence of \( k \) bits) mapped to a symbol via a constellation diagram; that symbol is then used to generate the transmitted waveform \( s_m(t) \) for \( 1 \leq m \leq M \), with \( M = 2^k \).
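The bit-to-message mapping can be sketched in a few lines. The natural binary labeling below is one illustrative choice, not the only possible mapping:

```python
k = 2                     # bits per symbol (assumed)
M = 2 ** k                # number of messages / waveforms
bits = [1, 0]             # one block of k bits

# Interpret the bit block as a binary integer, then shift to 1-based indexing
m = 1 + sum(b << (k - 1 - i) for i, b in enumerate(bits))
# bits [1, 0] -> binary 10 -> 2 -> m = 3, i.e. the waveform s_3(t)
```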

Example: ML Decision Rule Using Vectorized Waveforms#

Consider the ML receiver above. In the vector formulation, the optimal detection problem reduces to selecting the signal vector \( \vec{s}_m \) closest to the received vector \( \vec{r} \) in Euclidean distance. Mathematically, the decision rule, in the equivalent finite-dimensional vector form, becomes:

\[ \hat{m} = \arg \min_{1 \le m \le M} \|\vec{r} - \vec{s}_m\|^2 \]

which is entirely equivalent to the decision rule derived in the continuous-time framework.
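The vector ML rule is a one-line nearest-neighbor search. The QPSK-like constellation and low noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Unit-energy constellation in R^2 (assumed, QPSK-like)
constellation = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2)

N0 = 0.01                 # low noise level (assumed)
m_true = 1
r = constellation[m_true] + rng.normal(0.0, np.sqrt(N0 / 2), size=2)

# hat{m} = argmin_m ||r - s_m||^2, computed for all M candidates at once
m_hat = int(np.argmin(np.sum((r - constellation) ** 2, axis=1)))
```

Broadcasting `r` against the constellation array computes all \(M\) squared distances in one step, which is why the vector formulation is also convenient for implementation.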