Abstract

Optimum processing is derived for detecting a signal sequence in a set of M "images" with a common background; the "images" are the final counts, after a fixed reception time, for each of J spatially separate detectors and for each of M separate receptions. The spatially varying background is assumed either unknown or normal with known mean and covariance; results are generally confined to the device-noise-limited case. It is shown that, unlike the classical case, optimum signal selection combined with optimum processing cannot entirely eliminate the detectability loss due to the background.
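
For intuition only, the sketch below shows the classical whitened matched-filter statistic for a known signal observed over J detectors against a normal background with known mean and covariance plus white device noise. This is a standard textbook construction and is not claimed to be the paper's exact processor (which treats count "images" and the device-noise-limited case); all variable names and numerical values are illustrative assumptions.

```python
import numpy as np

def whitened_matched_filter(x, s, mu, Sigma, sigma_n2):
    """Detection statistic t = s^T C^{-1} (x - mu), where
    C = Sigma + sigma_n2 * I is the total (background + device) covariance.
    Illustrative sketch only; not the paper's derived processor."""
    J = len(s)
    C = Sigma + sigma_n2 * np.eye(J)
    return s @ np.linalg.solve(C, x - mu)

# Toy usage: J = 4 detectors, M = 3 receptions; per-reception statistics
# are summed, which is appropriate when receptions are independent.
rng = np.random.default_rng(0)
J, M = 4, 3
s = np.array([1.0, 2.0, 1.5, 0.5])      # assumed known signal pattern
mu = np.full(J, 10.0)                   # assumed background mean
Sigma = 0.5 * np.eye(J) + 0.1           # assumed background covariance
sigma_n2 = 0.2                          # assumed device-noise variance

C = Sigma + sigma_n2 * np.eye(J)
x = rng.multivariate_normal(mu + s, C, size=M)   # M receptions with signal present
t = sum(whitened_matched_filter(x[m], s, mu, Sigma, sigma_n2) for m in range(M))
print("detection statistic:", t)
```

Under these Gaussian assumptions the statistic is compared against a threshold set by the desired false-alarm rate; the background's covariance enters through the whitening step, which is why a nonuniform background degrades detectability even with optimum processing.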
