We consider the classical Neyman–Pearson hypothesis testing problem of signal detection, where under the null hypothesis ( <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${\mathcal{ H}}_{0}$ </tex-math></inline-formula> ), the received signal is white Gaussian noise, and under the alternative hypothesis ( <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${\mathcal{ H}}_{1}$ </tex-math></inline-formula> ), the received signal also includes an additional non-Gaussian random signal, which in turn can be viewed as a deterministic waveform plus zero-mean, non-Gaussian noise. Instead of the classical likelihood ratio test detector, which, in general, may be difficult to implement, we impose a (mismatched) correlation detector, which is relatively easy to implement, and we characterize the optimal correlator weights in the sense of the best trade-off between the false-alarm error exponent and the missed-detection error exponent. These optimal correlator weights depend (non-linearly, in general) on the underlying deterministic waveform under <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${\mathcal{ H}}_{1}$ </tex-math></inline-formula> . We then allow the deterministic waveform itself to be optimized (subject to a power constraint), jointly with the correlator weights, and show that both the optimal waveform and the optimal correlator weights may take on values in a small finite set of typically no more than two to four levels, depending on the distribution of the non-Gaussian noise component. Finally, we outline an extension of the scope to a wider class of detectors that are based on linear combinations of the correlation and the energy of the received signal.
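For concreteness, the correlation detector discussed above compares the inner product of the received samples with a fixed weight vector against a threshold. The following is a minimal sketch, not taken from the paper: the constant waveform, the Laplacian choice for the non-Gaussian noise component, the matched (rather than optimized) weights, and the threshold value are all illustrative assumptions.

```python
import math
import random

random.seed(0)

n = 400
s = [1.0] * n                             # hypothetical deterministic waveform under H1
norm = math.sqrt(sum(x * x for x in s))
c = [x / norm for x in s]                 # correlator weights; here simply matched to s
                                          # (the paper characterizes the optimal weights)
tau = 5.0                                 # illustrative decision threshold

def correlation_detector(y, c, tau):
    """Declare H1 iff the correlation statistic sum_i c_i * y_i exceeds tau."""
    return sum(ci * yi for ci, yi in zip(c, y)) > tau

def laplace():
    """Zero-mean Laplacian sample, as a difference of two Exp(1) variates."""
    return random.expovariate(1.0) - random.expovariate(1.0)

# H0: white Gaussian noise only
y0 = [random.gauss(0.0, 1.0) for _ in range(n)]
# H1: deterministic waveform + zero-mean non-Gaussian (Laplacian) noise + Gaussian noise
y1 = [si + laplace() + random.gauss(0.0, 1.0) for si in s]

print(correlation_detector(y0, c, tau))   # H0 block: expected not to cross tau
print(correlation_detector(y1, c, tau))   # H1 block: expected to cross tau
```

Since the weights are unit-norm, the statistic is standard normal under H0, while under H1 its mean grows like the correlation of the weights with the waveform, which is what makes the error exponents of both kinds decay with the block length.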