Optimum Digital Detector – Alternative Representation#

Notations#

  • $z$: A complex column vector in $\mathbb{C}^k$.

  • $\bar{z}$: The element-wise complex conjugate of $z$. If $z$ has components $z_1, z_2, \dots, z_k$, then $\bar{z}$ has components $\bar{z}_1, \bar{z}_2, \dots, \bar{z}_k$.

  • $z^T$: The transpose of $z$, converting it from a column vector to a row vector without taking the complex conjugate.

  • $z^H$: The conjugate transpose (Hermitian transpose) of $z$. This operation involves both transposing and taking the complex conjugate, resulting in a row vector: $z^H = \overline{(z^T)} = (\bar{z})^T$

  • $M^{-1}$: The inverse of the covariance matrix $M$, which is assumed to be Hermitian ($M = M^H$) and positive definite.
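
To make the notation concrete, here is a minimal NumPy sketch (variable names are illustrative, not from the original text) mapping each operation to code:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4

# z: a complex column vector in C^k (shape (k, 1))
z = rng.standard_normal((k, 1)) + 1j * rng.standard_normal((k, 1))

z_bar = np.conj(z)       # element-wise complex conjugate (z bar)
z_T = z.T                # transpose only, no conjugation (z^T)
z_H = z.conj().T         # conjugate transpose            (z^H)

# A Hermitian positive-definite covariance matrix M and its inverse
A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
M = A @ A.conj().T + k * np.eye(k)   # M = M^H, positive definite
M_inv = np.linalg.inv(M)
```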

Comparing $\bar{z}^T M^{-1} z$ and $z^H M^{-1} z$#

We consider two expressions:

A. $\bar{z}^T M^{-1} z$

B. $z^H M^{-1} z$

Expression A: $\bar{z}^T M^{-1} z$

Let $z$ be a $k \times 1$ complex vector:

$$z = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_k \end{bmatrix}$$

Let $M^{-1}$ be a $k \times k$ Hermitian matrix:

$$M^{-1} = \begin{bmatrix} M^{-1}_{11} & M^{-1}_{12} & \cdots & M^{-1}_{1k} \\ M^{-1}_{21} & M^{-1}_{22} & \cdots & M^{-1}_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ M^{-1}_{k1} & M^{-1}_{k2} & \cdots & M^{-1}_{kk} \end{bmatrix}$$

The complex conjugate of $z$:

$$\bar{z} = \begin{bmatrix} \bar{z}_1 \\ \bar{z}_2 \\ \vdots \\ \bar{z}_k \end{bmatrix}$$

Compute $\bar{z}^T M^{-1} z$

Perform the matrix multiplication step-by-step.

Multiply $\bar{z}^T$ with $M^{-1}$:

$$\bar{z}^T M^{-1} = \begin{bmatrix} \bar{z}_1 M^{-1}_{11} + \bar{z}_2 M^{-1}_{21} + \cdots + \bar{z}_k M^{-1}_{k1} & \bar{z}_1 M^{-1}_{12} + \bar{z}_2 M^{-1}_{22} + \cdots + \bar{z}_k M^{-1}_{k2} & \cdots & \bar{z}_1 M^{-1}_{1k} + \bar{z}_2 M^{-1}_{2k} + \cdots + \bar{z}_k M^{-1}_{kk} \end{bmatrix}$$

Multiply the result with $z$:

$$\bar{z}^T M^{-1} z = \sum_{i=1}^{k} \sum_{j=1}^{k} \bar{z}_i M^{-1}_{ij} z_j$$

This is a scalar value.

Expression B: $z^H M^{-1} z$

Recall that:

$$z^H = \begin{bmatrix} \bar{z}_1 & \bar{z}_2 & \cdots & \bar{z}_k \end{bmatrix}$$

Perform the matrix multiplication step-by-step.

Multiply $z^H$ with $M^{-1}$:

$$z^H M^{-1} = \begin{bmatrix} \bar{z}_1 M^{-1}_{11} + \bar{z}_2 M^{-1}_{21} + \cdots + \bar{z}_k M^{-1}_{k1} & \bar{z}_1 M^{-1}_{12} + \bar{z}_2 M^{-1}_{22} + \cdots + \bar{z}_k M^{-1}_{k2} & \cdots & \bar{z}_1 M^{-1}_{1k} + \bar{z}_2 M^{-1}_{2k} + \cdots + \bar{z}_k M^{-1}_{kk} \end{bmatrix}$$

Multiply the result with $z$:

$$z^H M^{-1} z = \sum_{i=1}^{k} \sum_{j=1}^{k} \bar{z}_i M^{-1}_{ij} z_j$$

This is also a scalar value.

Therefore, since $\bar{z}^T$ and $z^H$ denote the same row vector, the two sums coincide term by term:

$$\bar{z}^T M^{-1} z = z^H M^{-1} z$$
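
As a quick numerical sanity check, the following sketch (illustrative, not from the original text) verifies the identity for a random Hermitian positive-definite $M$:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5

# Random complex column vector and a Hermitian positive-definite M
z = rng.standard_normal((k, 1)) + 1j * rng.standard_normal((k, 1))
A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
M = A @ A.conj().T + k * np.eye(k)
M_inv = np.linalg.inv(M)

expr_A = (np.conj(z).T @ M_inv @ z).item()   # (z bar)^T M^{-1} z
expr_B = (z.conj().T @ M_inv @ z).item()     # z^H M^{-1} z

assert np.isclose(expr_A, expr_B)            # the two forms coincide
print(expr_A)                                # real (up to rounding), positive
```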

Comparing $z\bar{z}^T$ and $zz^H$#

$z\bar{z}^T$:

$$z\bar{z}^T = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_k \end{bmatrix} \begin{bmatrix} \bar{z}_1 & \bar{z}_2 & \cdots & \bar{z}_k \end{bmatrix} = \begin{bmatrix} z_1\bar{z}_1 & z_1\bar{z}_2 & \cdots & z_1\bar{z}_k \\ z_2\bar{z}_1 & z_2\bar{z}_2 & \cdots & z_2\bar{z}_k \\ \vdots & \vdots & \ddots & \vdots \\ z_k\bar{z}_1 & z_k\bar{z}_2 & \cdots & z_k\bar{z}_k \end{bmatrix}$$

$zz^H$:

$$zz^H = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_k \end{bmatrix} \begin{bmatrix} \bar{z}_1 & \bar{z}_2 & \cdots & \bar{z}_k \end{bmatrix} = \begin{bmatrix} z_1\bar{z}_1 & z_1\bar{z}_2 & \cdots & z_1\bar{z}_k \\ z_2\bar{z}_1 & z_2\bar{z}_2 & \cdots & z_2\bar{z}_k \\ \vdots & \vdots & \ddots & \vdots \\ z_k\bar{z}_1 & z_k\bar{z}_2 & \cdots & z_k\bar{z}_k \end{bmatrix}$$

Therefore:

$$z\bar{z}^T = zz^H$$
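
The outer-product identity is equally direct, since $\bar{z}^T$ and $z^H$ are the same row vector; a quick check (illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
z = (rng.standard_normal(3) + 1j * rng.standard_normal(3)).reshape(-1, 1)

# z (z bar)^T and z z^H are built from the identical row vector
assert np.allclose(z @ np.conj(z).T, z @ z.conj().T)
```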

Joint Probability Density Function#

The noise vector $z$ consists of $k$ zero-mean complex Gaussian random variables.

The joint PDF for $z$ is given by:

$$p(z) = \frac{1}{\pi^k \det(M)} \exp\left(-z^H M^{-1} z\right)$$
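
A direct transcription of this density into code, as a minimal sketch (the helper name is illustrative; the same function evaluates the hypothesis-conditional densities below by passing the centered vector $y - u_i$):

```python
import numpy as np

def complex_gaussian_pdf(z, M):
    """PDF of a zero-mean circular complex Gaussian vector with covariance M.

    For the conditional densities p(y|H_i) below, call with z = y - u_i.
    """
    k = M.shape[0]
    quad = (z.conj() @ np.linalg.inv(M) @ z).real   # z^H M^{-1} z (real, >= 0)
    return np.exp(-quad) / (np.pi ** k * np.linalg.det(M).real)
```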

Covariance Matrix M#

The covariance matrix $M$ characterizes the second-order statistical properties of the complex noise vector $z$.

$$M = E\{zz^H\}$$
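
As a sketch (assuming circularly symmetric Gaussian noise; all names are illustrative), one can synthesize samples of $z$ with a prescribed covariance and verify $M = E\{zz^H\}$ by a sample average:

```python
import numpy as np

rng = np.random.default_rng(2)
k, N = 3, 200_000

# True covariance: Hermitian positive definite
A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
M = A @ A.conj().T + k * np.eye(k)

# Coloring: if w ~ CN(0, I), then z = L w has covariance L L^H = M
L_chol = np.linalg.cholesky(M)
w = (rng.standard_normal((k, N)) + 1j * rng.standard_normal((k, N))) / np.sqrt(2)
Z = L_chol @ w                        # each column is one sample of z

M_hat = (Z @ Z.conj().T) / N          # empirical average of z z^H
print(np.abs(M_hat - M).max())        # -> small for large N
```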

Probability Density Functions (PDFs) of the Measurement Vector $y$ Under Hypotheses $H_0$ and $H_1$#

Given the measurement model:

$$y = u_i + z, \quad i = 0, 1$$

Under $H_0$:

$$p(y|H_0) = \frac{1}{\pi^k \det(M)} \exp\left(-(y - u_0)^H M^{-1} (y - u_0)\right)$$

Under $H_1$:

$$p(y|H_1) = \frac{1}{\pi^k \det(M)} \exp\left(-(y - u_1)^H M^{-1} (y - u_1)\right)$$

Likelihood Ratio Test (LRT)#

The likelihood ratio $L(y)$ is defined as:

$$L(y) = \frac{p(y|H_1)}{p(y|H_0)}$$

Substituting the PDFs:

$$L(y) = \frac{\exp\left(-(y - u_1)^H M^{-1} (y - u_1)\right)}{\exp\left(-(y - u_0)^H M^{-1} (y - u_0)\right)} = \exp\left(-(y - u_1)^H M^{-1} (y - u_1) + (y - u_0)^H M^{-1} (y - u_0)\right)$$

Taking the natural logarithm of both sides:

$$\ln L(y) = (y - u_0)^H M^{-1} (y - u_0) - (y - u_1)^H M^{-1} (y - u_1)$$

Simplifying the expression:

$$\ln L(y) = 2\,\mathrm{Re}\left[(u_1 - u_0)^H M^{-1} y\right] - \left(u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0\right)$$

Simplification of the Log-Likelihood Ratio#

Expand each quadratic form:

$$(y - u_i)^H M^{-1} (y - u_i) = y^H M^{-1} y - y^H M^{-1} u_i - u_i^H M^{-1} y + u_i^H M^{-1} u_i$$

Given that $M$ is Hermitian ($M = M^H$), its inverse $M^{-1}$ is also Hermitian, and the following holds:

$$u_i^H M^{-1} y = \left(y^H M^{-1} u_i\right)^*$$

Since the two cross terms are complex conjugates of each other, their sum is automatically real and equals twice the real part of either term; no additional assumption on the signals $u_0$ and $u_1$ is required.

Substituting the expanded forms back into $\ln L(y)$:

$$\begin{aligned} \ln L(y) &= -\left[y^H M^{-1} y - y^H M^{-1} u_1 - u_1^H M^{-1} y + u_1^H M^{-1} u_1\right] + \left[y^H M^{-1} y - y^H M^{-1} u_0 - u_0^H M^{-1} y + u_0^H M^{-1} u_0\right] \\ &= -y^H M^{-1} y + y^H M^{-1} u_1 + u_1^H M^{-1} y - u_1^H M^{-1} u_1 + y^H M^{-1} y - y^H M^{-1} u_0 - u_0^H M^{-1} y + u_0^H M^{-1} u_0 \\ &= \left(y^H M^{-1} u_1 + u_1^H M^{-1} y\right) - \left(y^H M^{-1} u_0 + u_0^H M^{-1} y\right) - \left(u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0\right) \end{aligned}$$

Group the terms involving $y$ and those that are independent of $y$:

$$\ln L(y) = \left[y^H M^{-1} (u_1 - u_0) + (u_1 - u_0)^H M^{-1} y\right] - \left[u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0\right]$$

Since $(u_1 - u_0)^H M^{-1} y = \left(y^H M^{-1} (u_1 - u_0)\right)^*$, the first bracket is the sum of a complex quantity and its conjugate, which is real and equal to twice its real part:

$$\ln L(y) = 2\,\mathrm{Re}\left[y^H M^{-1} (u_1 - u_0)\right] - \left[u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0\right]$$

Noting that $\mathrm{Re}[a] = \mathrm{Re}[a^*]$, the log-likelihood ratio thus simplifies to:

$$\ln L(y) = \underbrace{2\,\mathrm{Re}\left[(u_1 - u_0)^H M^{-1} y\right]}_{\text{Weighted Sum of } y} - \underbrace{\left[\left(u_1^H M^{-1} u_1\right) - \left(u_0^H M^{-1} u_0\right)\right]}_{\text{Independent of } y}$$

Weighted Sum of $y$:

$$2\,\mathrm{Re}\left[(u_1 - u_0)^H M^{-1} y\right]$$

This term is a linear combination of the measurement vector $y$, weighted by the difference between the signal vectors under $H_1$ and $H_0$ and scaled by the inverse covariance matrix $M^{-1}$.

Component Independent of $y$:

$$\left(u_1^H M^{-1} u_1\right) - \left(u_0^H M^{-1} u_0\right)$$

This term is a constant offset that depends solely on the signal vectors and the noise covariance.

It represents the difference in the energy or signal strength between the two hypotheses, normalized by the noise covariance.
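
A short numerical check (illustrative code, with randomly generated $u_0$, $u_1$, $M$, and $y$) confirming that the simplified form matches the direct difference of quadratic forms:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4

A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
M = A @ A.conj().T + k * np.eye(k)
M_inv = np.linalg.inv(M)

u0 = rng.standard_normal(k) + 1j * rng.standard_normal(k)
u1 = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y  = rng.standard_normal(k) + 1j * rng.standard_normal(k)

def quad(v):
    """v^H M^{-1} v, a real non-negative scalar."""
    return (v.conj() @ M_inv @ v).real

# Direct form: ln L(y) = (y-u0)^H M^{-1}(y-u0) - (y-u1)^H M^{-1}(y-u1)
direct = quad(y - u0) - quad(y - u1)

# Simplified form: 2 Re[(u1-u0)^H M^{-1} y] - (u1^H M^{-1} u1 - u0^H M^{-1} u0)
simplified = 2 * ((u1 - u0).conj() @ M_inv @ y).real - (quad(u1) - quad(u0))

assert np.isclose(direct, simplified)
```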

Decision Rule#

General Decision Rule Using Likelihood Statistic

The general decision rule using the likelihood ratio $L(y)$ is defined as:

$$L(y) \underset{H_0}{\overset{H_1}{\gtrless}} \eta$$

Here, $\eta$ is the threshold that determines the decision boundary between the two hypotheses.

Decision Rule Using Log-Likelihood Statistic

Alternatively, the decision rule can be expressed using the log-likelihood ratio $\ell(y) = \ln L(y)$:

$$\ell(y) \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta \triangleq \eta_L$$

Here, $\eta_L = \ln \eta$ is the logarithm of the threshold.

Simplifying the Log-Likelihood Ratio

Define a constant term $C$:

$$C = u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0$$

Since $C$ is independent of $y$, it can be moved to the threshold side; dividing the resulting inequality by 2 gives the modified threshold:

$$\eta_0 = \frac{\eta_L + C}{2}$$

Simplified Decision Rule

$$\mathrm{Re}\left\{(u_1 - u_0)^H M^{-1} y\right\} \underset{H_0}{\overset{H_1}{\gtrless}} \eta_0$$

Weight Vector h#

To further simplify the decision rule, we define a weight vector $h$ as follows:

$$h = M^{-1} (u_1 - u_0)$$

Purpose of $h$: it acts as a linear filter that projects the measurement vector $y$ onto the direction defined by the difference of the signal vectors, $u_1 - u_0$, scaled by the inverse of the noise covariance matrix.

Substituting the definition of $h$ into the log-likelihood ratio (using $(M^{-1})^H = M^{-1}$, so that $h^H = (u_1 - u_0)^H M^{-1}$):

$$\ell(y) = 2\,\mathrm{Re}\left[(u_1 - u_0)^H M^{-1} y\right] - \left(u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0\right) = 2\,\mathrm{Re}\left[h^H y\right] - \left(u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0\right)$$

By defining the weight vector $h$ and incorporating the constant term into the threshold $\eta_0$, the decision rule is simplified to:

$$\mathrm{Re}\left\{h^H y\right\} \underset{H_0}{\overset{H_1}{\gtrless}} \eta_0$$

Where:

$$h = M^{-1} (u_1 - u_0)$$

and

$$\eta_0 = \frac{\ln \eta + \left(u_1^H M^{-1} u_1 - u_0^H M^{-1} u_0\right)}{2}$$
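
Putting everything together, here is a compact sketch of the resulting detector (all names and test signals are illustrative assumptions; $\eta = 1$, i.e., $\eta_L = 0$, is used as a neutral default threshold):

```python
import numpy as np

def design_detector(u0, u1, M, eta=1.0):
    """Precompute the weight vector h and threshold eta_0 for the test
    Re{h^H y} > eta_0  =>  decide H1, otherwise decide H0."""
    M_inv = np.linalg.inv(M)
    h = M_inv @ (u1 - u0)                                # h = M^{-1}(u1 - u0)
    C = (u1.conj() @ M_inv @ u1 - u0.conj() @ M_inv @ u0).real
    eta0 = (np.log(eta) + C) / 2                         # eta_0 = (ln eta + C)/2
    return h, eta0

def decide(y, h, eta0):
    """Return 1 if H1 is declared, 0 otherwise."""
    return int((h.conj() @ y).real > eta0)

# Illustrative example: u0 = signal absent, u1 = constant signal
rng = np.random.default_rng(4)
k = 4
u0 = np.zeros(k, dtype=complex)
u1 = np.ones(k, dtype=complex)
A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
M = A @ A.conj().T + k * np.eye(k)

h, eta0 = design_detector(u0, u1, M)

# One measurement under H1: y = u1 + z with z ~ CN(0, M)
L_chol = np.linalg.cholesky(M)
z = L_chol @ ((rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2))
y = u1 + z
print(decide(y, h, eta0))   # prints 1 more often than 0
```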