Waveform and Vector AWGN Channel Description#

The waveform Additive White Gaussian Noise (AWGN) channel is a fundamental model in communication theory, characterized by a simple yet powerful input-output relationship. Specifically, the received signal \( r(t) \) is expressed as:

\[ r(t) = s_m(t) + n(t) \]

Here, \( s_m(t) \) represents one of \( M \) possible transmitted signals, chosen from the set

\[ \{s_1(t), s_2(t), \ldots, s_M(t)\} \]

Each signal \( s_m(t) \) has an associated prior probability \( P_m \), which reflects the likelihood of that specific signal being transmitted. These signals are deterministic waveforms, typically designed to carry information over the channel, and the index \( m \) (where \( 1 \leq m \leq M \)) identifies which signal is sent in a given transmission.

The term \( n(t) \) represents the noise component, modeled as a zero-mean white Gaussian process: its average value is zero, and any finite collection of its samples is jointly Gaussian. The power spectral density of \( n(t) \) is constant across all frequencies and given by

\[ \frac{N_0}{2} \]

where \( N_0 \) is the noise power spectral density (in watts per hertz). This “white” property is equivalent to an autocorrelation function \( R_n(\tau) = \frac{N_0}{2}\delta(\tau) \): noise samples at distinct times are uncorrelated, which makes the noise a challenging yet mathematically tractable form of interference in signal detection.

To represent these signals in a more manageable form, we assume the use of the Gram-Schmidt procedure, a method from linear algebra that constructs an orthonormal basis

\[ \{\phi_j(t), 1 \leq j \leq N\} \]

This basis spans the space of the \( M \) possible signals, where \( N \) is the dimensionality of this space (\( N \leq M \), with equality when the signals are linearly independent). Each signal \( s_m(t) \) can then be expressed as a vector \( \vec{s}_m \) in this \( N \)-dimensional space, providing a compact and computationally efficient representation for further analysis.
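As a minimal numerical sketch (assuming sampled waveforms and NumPy, with inner products approximated by Riemann sums), the Gram-Schmidt construction can be carried out directly on the signal samples; `gram_schmidt` below is an illustrative helper, not a library function:

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Build an orthonormal basis for a set of sampled waveforms.

    signals: array of shape (M, T), each row a waveform s_m(t) sampled
    with step dt. Inner products approximate integrals: <x, y> = sum(x*y)*dt.
    Returns (basis, coords): basis has shape (N, T) with N <= M, and
    coords[m, j] = s_mj so that s_m(t) = sum_j coords[m, j] * basis[j].
    """
    basis = []
    for s in signals:
        residual = s.astype(float).copy()
        for phi in basis:
            # Subtract the projection onto each earlier basis function.
            residual -= np.sum(residual * phi) * dt * phi
        energy = np.sum(residual**2) * dt
        if energy > 1e-12:  # keep only linearly independent directions
            basis.append(residual / np.sqrt(energy))
    basis = np.array(basis)
    coords = signals @ basis.T * dt  # s_mj = <s_m, phi_j>
    return basis, coords

# Example: two antipodal pulses span a one-dimensional space (N = 1 < M = 2).
dt = 1e-3
t = np.arange(0, 1, dt)
s1 = np.ones_like(t)
s2 = -np.ones_like(t)
basis, coords = gram_schmidt(np.vstack([s1, s2]), dt)
print(basis.shape[0])   # N = 1
print(coords)           # approximately [[1.0], [-1.0]]
```

The antipodal example shows why \( N \) can be strictly smaller than \( M \): the second signal is a scalar multiple of the first, so it contributes no new basis direction.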

Noise Decomposition in the Waveform Channel#

The noise process \( n(t) \) in the AWGN channel is continuous and infinite-dimensional, meaning it cannot be fully captured by a finite set of basis functions like

\[ \{\phi_j(t)\}_{j=1}^N \]

To handle this, we decompose \( n(t) \) into two distinct components based on the signal space defined by the orthonormal basis.

The first component, \( n_1(t) \), is the projection of the noise onto the \( N \)-dimensional space spanned by \( \{\phi_j(t)\} \). Mathematically, it is written as:

\[ n_1(t) = \sum_{j=1}^{N} n_j \phi_j(t) \]

where

\[ n_j = \langle n(t), \phi_j(t) \rangle \]

is the inner product (or projection) of the noise onto the \( j \)-th basis function. Each \( n_j \) is a random variable, and because \( n(t) \) is a zero-mean Gaussian process and the basis functions are orthonormal, the \( n_j \)'s are independent Gaussian random variables with zero mean and variance

\[ \frac{N_0}{2} \]

This component represents the part of the noise that interferes directly with the signal in the signal space.

The second component, \( n_2(t) \), is the remainder of the noise that lies outside this \( N \)-dimensional space:

\[ n_2(t) = n(t) - n_1(t) \]

This term captures all the noise energy that cannot be expressed using the basis \( \{\phi_j(t)\} \). Since it is orthogonal to the signal space by construction (the projection \( n_1(t) \) onto that space has been subtracted), \( n_2(t) \) does not affect the projections of the received signal onto the basis functions.
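The decomposition \( n(t) = n_1(t) + n_2(t) \) can be verified numerically. The sketch below assumes sampled waveforms with \( \langle x, y \rangle \approx \sum x y \, \Delta t \), and uses a standard discrete stand-in for white noise of PSD \( N_0/2 \), namely i.i.d. samples with per-sample variance \( (N_0/2)/\Delta t \):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0, 1, dt)
N0 = 2.0

# Two orthonormal basis functions: normalized cosine and sine over [0, 1).
phi = np.vstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
phi /= np.sqrt(np.sum(phi**2, axis=1, keepdims=True) * dt)

# Discrete stand-in for white noise with PSD N0/2.
n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.size)

n_j = phi @ n * dt   # projections <n, phi_j>
n1 = n_j @ phi       # in-space component n1(t)
n2 = n - n1          # residual n2(t)

# By construction, n2 is orthogonal to every basis function:
print(np.max(np.abs(phi @ n2 * dt)) < 1e-9)   # True
```

Subtracting the projection guarantees \( \langle n_2, \phi_j \rangle = 0 \) for every \( j \), exactly as the derivation states, regardless of the particular noise realization.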

Vector Representation of the Received Signal#

Given the orthonormal basis, each transmitted signal \( s_m(t) \) can be expanded as:

\[ s_m(t) = \sum_{j=1}^{N} s_{mj} \phi_j(t) \]

where

\[ s_{mj} = \langle s_m(t), \phi_j(t) \rangle \]

is the \( j \)-th coordinate of the signal \( s_m(t) \) in the basis.

By combining this expansion with the noise decomposition, the received signal can be expressed as:

\[ r(t) = \sum_{j=1}^{N} (s_{mj} + n_j) \phi_j(t) + n_2(t) \]

To simplify, we define the received coefficients:

\[ r_j = s_{mj} + n_j \]

where

\[ r_j = \langle r(t), \phi_j(t) \rangle = \langle s_m(t) + n(t), \phi_j(t) \rangle = \langle s_m(t), \phi_j(t) \rangle + \langle n(t), \phi_j(t) \rangle \]

Here, \( r_j \) is the projection of the received signal onto the \( j \)-th basis function, capturing both the signal component and the relevant noise contribution.

Thus, the received signal can be rewritten as:

\[ r(t) = \sum_{j=1}^{N} r_j \phi_j(t) + n_2(t) \]

where

\[ \sum_{j=1}^{N} r_j \phi_j(t) \]

represents the component of \( r(t) \) in the signal space, while \( n_2(t) \) is the residual noise lying outside this space.
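The identity \( r_j = s_{mj} + n_j \) can be checked directly on sampled waveforms (again approximating inner products by Riemann sums): projecting the received waveform onto the basis recovers the signal coordinates plus the in-space noise, untouched by anything outside the signal space.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0, 1, dt)

# Orthonormal basis: normalized cosine and sine over [0, 1).
phi = np.vstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
phi /= np.sqrt(np.sum(phi**2, axis=1, keepdims=True) * dt)

s_coords = np.array([1.5, -0.5])       # (s_m1, s_m2): coordinates of s_m
s_m = s_coords @ phi                   # the waveform s_m(t)
n = rng.normal(0.0, 1.0, size=t.size)  # an arbitrary noise realization
r = s_m + n                            # received waveform

r_j = phi @ r * dt                     # projections <r, phi_j>
n_j = phi @ n * dt                     # projections <n, phi_j>
print(np.allclose(r_j, s_coords + n_j))   # True: r_j = s_mj + n_j
```

By linearity of the inner product, this holds for any noise realization; the residual \( n_2(t) \) simply never appears in \( r_j \).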

Independence and Relevance of Components#

A key insight is that \( n_2(t) \) is uncorrelated with all the \( n_j \)'s. Since \( n(t) \) is Gaussian, and uncorrelatedness implies independence for jointly Gaussian random variables, \( n_2(t) \) is independent of

\[ n_1(t) = \sum_{j=1}^{N} n_j \phi_j(t) \]

Furthermore, \( n_2(t) \) is independent of \( s_m(t) \), as the signal is deterministic and confined to the signal space, while \( n_2(t) \) is orthogonal to it.

In the expression for \( r(t) \), the two parts—

\[ \sum_{j=1}^{N} r_j \phi_j(t) \quad \text{and} \quad n_2(t) \]

are therefore independent. The first part,

\[ \sum_{j=1}^{N} r_j \phi_j(t) \]

contains all the information about the transmitted signal \( s_m(t) \) plus the relevant noise \( n_1(t) \). The second part, \( n_2(t) \), being independent of both the signal and the first component, carries no information about \( s_m(t) \).

For detection purposes, only the signal-space component matters. The optimal detector, which seeks to determine which \( s_m(t) \) was sent based on \( r(t) \), can ignore \( n_2(t) \) without any loss of performance. This is because \( n_2(t) \) does not influence the projections \( r_j \), which are sufficient statistics for detecting the signal. In essence, \( n_2(t) \) is irrelevant to the decision process.

This leads to a critical conclusion: the waveform AWGN channel, originally given by

\[ r(t) = s_m(t) + n(t), \quad 1 \leq m \leq M \]

is equivalent, for detection purposes, to an \( N \)-dimensional vector channel:

\[ \boxed{ \vec{r} = \vec{s}_m + \vec{n}, \quad 1 \leq m \leq M } \]

where

\[ \vec{r} = [r_1, r_2, \ldots, r_N], \quad \vec{s}_m = [s_{m1}, s_{m2}, \ldots, s_{mN}], \quad \text{and} \quad \vec{n} = [n_1, n_2, \ldots, n_N] \]

This vector model simplifies the analysis and design of communication systems, reducing the infinite-dimensional waveform problem to a finite-dimensional vector problem where each dimension corresponds to a basis function projection.
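To make the payoff concrete: on the equivalent vector channel with equal priors, the optimal decision reduces to choosing the signal vector nearest to \( \vec{r} \) in Euclidean distance (the standard minimum-distance rule for AWGN; the `detect` helper and the QPSK-like constellation below are illustrative):

```python
import numpy as np

def detect(r, constellation):
    """Minimum-distance rule: index of the signal vector closest to r."""
    return int(np.argmin(np.sum((constellation - r) ** 2, axis=1)))

# Four signal vectors in an N = 2 dimensional signal space.
S = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

m = 2                          # transmitted message index
n_vec = np.array([0.1, -0.2])  # a small noise sample
r = S[m] + n_vec               # received vector r = s_m + n
print(detect(r, S))            # 2: the correct message is recovered
```

The entire waveform detection problem has collapsed to comparing a handful of \( N \)-dimensional distances, which is exactly why the vector formulation is preferred in practice.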

In summary, the waveform and vector AWGN channel models are equivalent for optimal detection: the orthonormal expansion captures the transmitted signal together with all relevant noise, and the residual noise outside the signal space can be discarded without loss.

Mathematical Model of the Vector AWGN Channel#

Building on the equivalence established between the waveform AWGN channel and its vector representation, we now explore the vector AWGN channel in detail. The mathematical model is expressed as:

\[ \vec{r} = \vec{s}_m + \vec{n} \]

Here, \( \vec{r} \), \( \vec{s}_m \), and \( \vec{n} \) are \( N \)-dimensional real vectors.

  • The received signal vector is given by

    \[ \vec{r} = [r_1, r_2, \ldots, r_N] \]

    where each component \( r_j \) is the projection of the received waveform \( r(t) \) onto the \( j \)-th orthonormal basis function \( \phi_j(t) \), as derived earlier.

  • The transmitted signal vector is

    \[ \vec{s}_m = [s_{m1}, s_{m2}, \ldots, s_{mN}] \]

    where each element

    \[ s_{mj} = \langle s_m(t), \phi_j(t) \rangle \]

    represents the projection of the transmitted waveform \( s_m(t) \) onto the basis. The index \( m \) (\( 1 \leq m \leq M \)) identifies the specific message chosen from the set \( \{1, 2, \dots, M\} \). The selection of \( m \) follows the prior probabilities \( P_m \), reflecting the likelihood of each message being sent.

  • The noise vector is

    \[ \vec{n} = [n_1, n_2, \ldots, n_N] \]

    where each noise component

    \[ n_j = \langle n(t), \phi_j(t) \rangle \]

    represents the projection of the white Gaussian noise process \( n(t) \) onto the basis functions.

Since \( n(t) \) is a zero-mean white Gaussian noise process, the noise components \( n_j \) are independent and identically distributed (i.i.d.), following the Gaussian distribution:

\[ n_j \sim \mathcal{N}(0, N_0/2) \]

The variance \( N_0/2 \) arises from the power spectral density of the noise, and the independence results from the orthonormality of the basis functions \( \{\phi_j(t)\} \).
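These distributional claims can be checked empirically. The sketch below (assuming sampled waveforms, with the usual discrete stand-in for white noise of per-sample variance \( (N_0/2)/\Delta t \)) estimates the variance and correlation of the projections over many noise realizations:

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 5e-3
t = np.arange(0, 1, dt)
N0 = 2.0   # so the predicted per-dimension variance is N0/2 = 1.0

# Orthonormal basis: normalized cosine and sine over [0, 1).
phi = np.vstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
phi /= np.sqrt(np.sum(phi**2, axis=1, keepdims=True) * dt)

# 5000 independent noise realizations, projected onto the basis.
noise = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(5000, t.size))
n_j = noise @ phi.T * dt   # 5000 draws of the pair (n_1, n_2)

print(np.var(n_j, axis=0))        # each close to N0/2 = 1.0
print(np.corrcoef(n_j.T)[0, 1])   # close to 0: components uncorrelated
```

The sample variances land near \( N_0/2 \) in each dimension and the cross-correlation near zero, matching the i.i.d. \( \mathcal{N}(0, N_0/2) \) model.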

Noise Probability Density Function#

Since the noise components are i.i.d. Gaussian, the probability density function (PDF) of the noise vector \( \vec{n} \) follows a multivariate Gaussian distribution:

\[ p(\vec{n}) = \left( \frac{1}{\sqrt{\pi N_0}} \right)^N \exp\left( -\frac{\sum_{j=1}^N n_j^2}{N_0} \right) \]

Using the Euclidean norm \( \|\vec{n}\|^2 = \sum_{j=1}^{N} n_j^2 \), this can be rewritten as:

\[ p(\vec{n}) = \left( \frac{1}{\sqrt{\pi N_0}} \right)^N \exp\left( -\frac{\|\vec{n}\|^2}{N_0} \right) \]

The normalization factor

\[ \left( \frac{1}{\sqrt{\pi N_0}} \right)^N = \left( \frac{1}{\sqrt{2\pi (N_0/2)}} \right)^N \]

ensures the PDF integrates to 1, which is the standard form of an \( N \)-dimensional Gaussian distribution with variance \( N_0/2 \) per dimension.
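As a quick sanity check of this formula, the sketch below (with an illustrative `p_noise` helper) verifies that the joint PDF factors into a product of one-dimensional Gaussian densities and that each one-dimensional factor integrates to 1:

```python
import numpy as np

def p_noise(n_vec, N0):
    """Joint PDF of the noise vector: N i.i.d. Gaussians, variance N0/2 each."""
    n_vec = np.asarray(n_vec, dtype=float)
    N = n_vec.size
    return (1.0 / np.sqrt(np.pi * N0)) ** N * np.exp(-np.sum(n_vec**2) / N0)

N0 = 2.0

# Factorization: the joint density equals the product of 1-D densities.
n = np.array([0.3, -1.2, 0.7])
per_dim = np.exp(-n**2 / N0) / np.sqrt(np.pi * N0)
print(np.isclose(p_noise(n, N0), np.prod(per_dim)))   # True

# Normalization: Riemann sum of the 1-D density over [-10, 10].
x = np.arange(-10.0, 10.0, 0.01)
total = np.sum(np.exp(-x**2 / N0) / np.sqrt(np.pi * N0)) * 0.01
print(round(total, 6))   # 1.0
```

The factorization is exactly the independence of the \( n_j \)'s, and it is what makes likelihood computations on the vector channel separable dimension by dimension.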

Implications of the Vector Model#

This vector AWGN channel model is the direct counterpart to the waveform AWGN channel in the time domain:

\[ r(t) = s_m(t) + n(t) \]

Through the orthonormal basis expansion, the problem is reduced to an \( N \)-dimensional vector space, where signal and noise interactions are described using finite-dimensional linear algebra. This makes the model a powerful framework for optimal signal detection and simplifies the analysis and design of communication systems.