Orthogonal Signaling#
In digital communication systems, orthogonal signaling is a method of transmitting information in which the individual signals are designed to be mutually orthogonal, i.e., to have zero inner product with one another.
Orthogonal signaling exploits this mutual orthogonality to represent data efficiently and with minimal interference between signals.
Review the Orthogonality Condition of Equal Energy Signals#
Consider a set of signals
\[ s_m(t), \quad m = 1, 2, \ldots, M, \]
each having equal energy. These signals are said to be orthogonal if their inner products satisfy
\[ \langle s_m(t), s_n(t) \rangle = \begin{cases} \mathcal{E}, & m = n \\ 0, & m \neq n \end{cases} \]
where the inner product \( \langle \cdot, \cdot \rangle \) is typically defined as
\[ \langle s_m(t), s_n(t) \rangle = \int_{-\infty}^{\infty} s_m(t)\, s_n(t)\, dt \]
The orthogonality condition means that each pair of distinct signals \( s_m(t) \) and \( s_n(t) \) (with \( m \neq n \)) has no overlap in the signal space, thereby reducing interference between them.
This property is particularly useful in communication systems for separating different transmitted symbols at the receiver.
Energy of a Signal#
For each signal \( s_m(t) \), the energy \( \mathcal{E} \) is given by
\[ \mathcal{E} = \int_{-\infty}^{\infty} s_m^2(t)\, dt = \langle s_m(t), s_m(t) \rangle \]
Since all signals are assumed to have equal energy, we have:
\[ \langle s_m(t), s_m(t) \rangle = \mathcal{E} \quad \text{for all } m = 1, 2, \ldots, M \]
When \( m = n \), the inner product returns the energy \( \mathcal{E} \) of the signal \( s_m(t) \). For \( m \neq n \), the inner product is zero, confirming the mutual orthogonality of the signals.
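The orthogonality and equal-energy conditions above can be checked numerically. The sketch below uses length-4 Walsh (Hadamard) sequences as a discrete stand-in for an orthogonal signal set; the amplitude `A` and sequence length are illustrative assumptions, not values from the text.

```python
import itertools

# Length-4 Walsh sequences: any two distinct rows are orthogonal.
# A is an arbitrary amplitude; each signal then has energy E = A^2 * N.
A, N = 2.0, 4
walsh = [
    [+1, +1, +1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]
signals = [[A * chip for chip in row] for row in walsh]

def inner(x, y):
    """Discrete analogue of <x, y> = integral of x(t) y(t) dt."""
    return sum(a * b for a, b in zip(x, y))

E = A * A * N  # common energy of every signal (here 16.0)
for m, n in itertools.product(range(4), repeat=2):
    expected = E if m == n else 0.0
    assert abs(inner(signals[m], signals[n]) - expected) < 1e-12
print("all pairs satisfy <s_m, s_n> = E when m = n, else 0; E =", E)
```

Every pair of distinct signals has zero inner product, while each signal's inner product with itself returns the common energy \( \mathcal{E} \).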
Orthonormal Basis Representation#
It is often useful to normalize the signals to form an orthonormal basis. By defining
\[ \phi_j(t) = \frac{s_j(t)}{\sqrt{\mathcal{E}}}, \quad j = 1, 2, \ldots, M, \]
each basis function \( \phi_j(t) \) has unit energy:
\[ \langle \phi_j(t), \phi_j(t) \rangle = \frac{1}{\mathcal{E}} \langle s_j(t), s_j(t) \rangle = 1 \]
and they remain orthogonal:
\[ \langle \phi_j(t), \phi_k(t) \rangle = 0 \quad \text{for } j \neq k \]
This orthonormal basis is particularly convenient for analysis and implementation because it simplifies many mathematical expressions and algorithms in communication system design.
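The normalization step can be sketched numerically. The two sampled "signals" below are illustrative equal-energy orthogonal sequences, not waveforms from the text; dividing each by \( \sqrt{\mathcal{E}} \) yields unit-energy basis functions that stay orthogonal.

```python
import math

# Two equal-energy, mutually orthogonal sample sequences (illustrative).
s1 = [2.0, 2.0, 2.0, 2.0]
s2 = [2.0, -2.0, 2.0, -2.0]

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

E = inner(s1, s1)                      # common energy (here 16.0)
phi1 = [v / math.sqrt(E) for v in s1]  # phi_j = s_j / sqrt(E)
phi2 = [v / math.sqrt(E) for v in s2]

print(inner(phi1, phi1))  # 1.0 (unit energy)
print(inner(phi1, phi2))  # 0.0 (still orthogonal)
```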
Vector Representation of Signals#
Orthogonal signals can be represented as vectors in an \( M \)-dimensional Euclidean space. A common representation is:
\[ \vec{s}_1 = \bigl(\sqrt{\mathcal{E}},\, 0,\, \ldots,\, 0\bigr), \quad \vec{s}_2 = \bigl(0,\, \sqrt{\mathcal{E}},\, \ldots,\, 0\bigr), \quad \ldots, \quad \vec{s}_M = \bigl(0,\, 0,\, \ldots,\, \sqrt{\mathcal{E}}\bigr) \]
In this representation, each signal vector has a single nonzero component equal to \( \sqrt{\mathcal{E}} \), and the vectors are mutually orthogonal.
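The vector picture can be sketched directly: signal \( m \) maps to \( \sqrt{\mathcal{E}} \) times the \( m \)-th standard basis vector. The values of \( M \) and \( \mathcal{E} \) below are illustrative.

```python
import itertools
import math

# Each signal vector has sqrt(E) in one coordinate and zeros elsewhere.
M, E = 4, 9.0
vecs = [[math.sqrt(E) if j == m else 0.0 for j in range(M)]
        for m in range(M)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# The vectors are mutually orthogonal, each with squared length E.
for m, n in itertools.product(range(M), repeat=2):
    assert math.isclose(dot(vecs[m], vecs[n]), E if m == n else 0.0)
print(vecs[1])  # [0.0, 3.0, 0.0, 0.0]
```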
Minimum Distance#
Euclidean Distance#
The derivation of the Euclidean distance \( d_{mn} \) between two distinct signals \( s_m(t) \) and \( s_n(t) \) (with \( m \neq n \)) is as follows.
For any two signals \( s_m(t) \) and \( s_n(t) \), the difference signal is given by:
\[ s_m(t) - s_n(t) \]
The Euclidean distance between the two signals is defined as the norm of this difference signal.
The squared Euclidean distance is computed using the inner product:
\[ d_{mn}^2 = \langle s_m(t) - s_n(t),\; s_m(t) - s_n(t) \rangle \]
Using the properties of the inner product (linearity and symmetry), we can expand the expression:
\[ d_{mn}^2 = \langle s_m(t), s_m(t) \rangle - \langle s_m(t), s_n(t) \rangle - \langle s_n(t), s_m(t) \rangle + \langle s_n(t), s_n(t) \rangle \]
Since the inner product is symmetric, \(\langle s_m(t), s_n(t) \rangle = \langle s_n(t), s_m(t) \rangle\). Thus, the expression simplifies to:
\[ d_{mn}^2 = \langle s_m(t), s_m(t) \rangle + \langle s_n(t), s_n(t) \rangle - 2\,\langle s_m(t), s_n(t) \rangle \]
For the signals \( s_m(t) \) and \( s_n(t) \), the energy of each signal is given by:
\[ \langle s_m(t), s_m(t) \rangle = \mathcal{E} \quad \text{and} \quad \langle s_n(t), s_n(t) \rangle = \mathcal{E} \]
Since the signals are orthogonal (for \( m \neq n \)):
\[ \langle s_m(t), s_n(t) \rangle = 0 \]
Substituting these into the expanded inner product gives:
\[ d_{mn}^2 = \mathcal{E} + \mathcal{E} - 2 \cdot 0 = 2\mathcal{E} \]
Finally, taking the square root of both sides, we find that for any two distinct orthogonal signals \( s_m(t) \) and \( s_n(t) \), the Euclidean distance between them is:
\[ d_{mn} = \sqrt{2\mathcal{E}} \]
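A quick numerical check of \( d_{mn} = \sqrt{2\mathcal{E}} \), using two orthogonal equal-energy vectors with illustrative values:

```python
import math

# Two orthogonal vectors, each with energy E = 9.0 (illustrative).
E = 9.0
s_m = [3.0, 0.0, 0.0]
s_n = [0.0, 3.0, 0.0]

# Euclidean distance computed directly from the difference signal.
d = math.sqrt(sum((a - b) ** 2 for a, b in zip(s_m, s_n)))
print(d, math.sqrt(2 * E))  # both equal sqrt(18)
```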
Minimum Distance#
Since all pairs of orthogonal signals have the same distance, the minimum distance \( d_{\text{min}} \) is:
\[ d_{\text{min}} = \sqrt{2\mathcal{E}} \]
A larger \( d_{\text{min}} \) generally leads to improved performance (i.e., lower probability of error) because the signals are more widely separated in the signal space.
Energy per Bit#
It is often useful to relate the total signal energy \( \mathcal{E} \) to the energy used to transmit one bit of information, denoted as \( \mathcal{E}_b \). When \( M \) signals are used to represent symbols, each symbol conveys \( \log_2 M \) bits. Therefore, the energy per bit is defined as:
\[ \mathcal{E}_b = \frac{\mathcal{E}}{\log_2 M} \]
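As a small worked example (the values of \( M \) and \( \mathcal{E} \) are illustrative):

```python
import math

# With M = 16 orthogonal signals, each symbol carries log2(16) = 4 bits,
# so a symbol energy of E = 8.0 corresponds to E_b = 8.0 / 4 = 2.0.
M, E = 16, 8.0
bits_per_symbol = math.log2(M)
E_b = E / bits_per_symbol
print(E_b)  # 2.0
```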
Minimum Distance in Terms of Energy per Bit#
By substituting the relationship between \( \mathcal{E} \) and \( \mathcal{E}_b \) into the expression for the minimum distance, we obtain:
\[ d_{\text{min}} = \sqrt{2\mathcal{E}} = \sqrt{2 \mathcal{E}_b \log_2 M} \]
This final expression shows that the minimum distance depends on both the modulation order \( M \) (through \( \log_2 M \)) and the energy per bit \( \mathcal{E}_b \).
Increasing \( \mathcal{E}_b \), or increasing \( M \) at a fixed \( \mathcal{E}_b \) (which raises the symbol energy \( \mathcal{E} = \mathcal{E}_b \log_2 M \)), leads to a larger \( d_{\text{min}} \), which in turn improves the robustness of the signal against noise and interference.
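The dependence of \( d_{\text{min}} \) on the modulation order can be checked numerically; the fixed \( \mathcal{E}_b \) value below is an illustrative choice.

```python
import math

# For fixed E_b, d_min = sqrt(2 * E_b * log2(M)) grows with M.
E_b = 2.0
for M in (2, 4, 16, 64):
    E = E_b * math.log2(M)          # symbol energy implied by E_b and M
    d_min = math.sqrt(2 * E)
    assert math.isclose(d_min, math.sqrt(2 * E_b * math.log2(M)))
    print(M, round(d_min, 4))
```

The printed distances increase monotonically in \( M \), reflecting the well-known trade of bandwidth (more orthogonal dimensions) for noise robustness in orthogonal signaling.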
Signal Space and Euclidean Subspace#
Signal Space: The Abstract View#
Abstract M-Dimensional Space
In communication theory, any set of \(M\) mutually orthogonal signals can be viewed as an orthogonal basis for an \(M\)-dimensional signal space.
Mathematically, each signal \(s_m(t)\) is mapped to a vector \(\vec{s}_m\) in this \(M\)-dimensional space.
Orthonormal Basis Functions
Suppose we have \(M\) orthonormal waveforms \(\{\phi_1(t), \phi_2(t), \ldots, \phi_M(t)\}\).
Any signal \(s_m(t)\) that lies entirely along the \(m\)-th basis waveform can be written as
\[ s_m(t) \;=\; \sqrt{\mathcal{E}}\,\phi_m(t) \]
In vector form, that means
\[ \vec{s}_m \;=\; (0,\, 0,\, \ldots,\; \underbrace{\sqrt{\mathcal{E}}}_{\text{m-th position}},\; 0, \ldots, 0) \]
This is precisely the idea behind writing \(\vec{s}_1 = \bigl(\sqrt{\mathcal{E}},\, 0,\, \dots,\, 0\bigr)\), \(\vec{s}_2 = \bigl(0,\, \sqrt{\mathcal{E}},\, \dots,\, 0\bigr)\), etc.
Euclidean Subspace#
When we represent signals using an orthonormal set of basis functions, we are essentially working in a finite-dimensional inner product space. This space has all the properties of a Euclidean space (distances, angles, etc.).
The Original Signal Space
In general, the set of all finite-energy signals, denoted \(L^2\), is an infinite-dimensional space.
The Finite-Dimensional Subspace#
Spanned by Basis Functions:
When we select a finite number of orthonormal basis functions \(\{\phi_1(t), \phi_2(t), \dots, \phi_M(t)\}\), these functions span an \(M\)-dimensional subspace of \(L^2\).

Representation:
Any signal \(s(t)\) that lies in this subspace can be written as:
\[ s(t) = \sum_{j=1}^{M} s[j]\, \phi_j(t) \]
where \(s[j] = \langle s(t), \phi_j(t) \rangle\) are the coefficients.
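The coefficient representation can be sketched in discrete time: project a signal onto an orthonormal basis via \( s[j] = \langle s, \phi_j \rangle \), then rebuild it from the coefficients. The basis (normalized Walsh sequences) and the test signal are illustrative choices.

```python
# Normalized length-4 Walsh sequences form a discrete orthonormal basis.
walsh = [
    [+1, +1, +1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]
phi = [[c / 2.0 for c in row] for row in walsh]  # each row has unit energy

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

s = [1.0, 2.0, 3.0, 4.0]                 # a signal in the span of phi
coeffs = [inner(s, p) for p in phi]      # s[j] = <s, phi_j>

# Reconstruct s(t) = sum_j s[j] phi_j(t) from its coefficients.
recon = [sum(c * p[k] for c, p in zip(coeffs, phi)) for k in range(4)]
print(coeffs)                            # [5.0, -1.0, -2.0, 0.0]
print([round(v, 6) for v in recon])      # [1.0, 2.0, 3.0, 4.0]
```

Because the basis spans the whole 4-dimensional space here, the reconstruction recovers the signal exactly.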
Euclidean Structure of the Subspace#
Inner Product:
The finite-dimensional subspace is equipped with the inner product inherited from \(L^2\):
\[ \langle s, r \rangle = \sum_{j=1}^{M} s[j] \, r[j] \]

Euclidean Space:
This inner product defines lengths (via the norm) and angles between the coefficient vectors. In other words, the subspace is isomorphic to the familiar \( \mathbb{R}^M \) (or \(\mathbb{C}^M\) for complex signals), which is exactly a Euclidean space.
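This Parseval-type identity can be verified on samples: the inner product of two signals built from an orthonormal basis equals the dot product of their coefficient vectors. The basis and coefficients below are illustrative.

```python
import math

# An orthonormal pair of length-4 sequences (illustrative).
phi = [[0.5, 0.5, 0.5, 0.5], [0.5, -0.5, 0.5, -0.5]]

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

# Build s and r in the subspace from chosen coefficient vectors.
cs, cr = [3.0, 1.0], [2.0, -4.0]
s = [cs[0] * phi[0][k] + cs[1] * phi[1][k] for k in range(4)]
r = [cr[0] * phi[0][k] + cr[1] * phi[1][k] for k in range(4)]

# Sample-domain inner product equals the coefficient dot product.
assert math.isclose(inner(s, r), cs[0] * cr[0] + cs[1] * cr[1])
print(inner(s, r))  # 2.0
```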
Typical Representation
When we write:
\[ \vec{s}_1 = \bigl(\sqrt{\mathcal{E}},\, 0,\, \ldots,\, 0\bigr) \]
we are representing the signal \(s_1(t)\) by its coordinates relative to the chosen orthonormal basis. Here, \(\sqrt{\mathcal{E}}\) is the coefficient along the first basis function and all other coefficients are zero.