Abstract

We present a systematic search for wide-separation (Einstein radius θE ≳ 1.5″), galaxy-scale strong lenses in the 30 000 deg² of the Pan-STARRS 3π survey of the Northern sky. With long time delays of a few days to weeks, these systems are particularly well-suited for catching strongly lensed supernovae with spatially resolved multiple images, and they offer new insights into early-phase supernova spectroscopy and cosmography. We produced a set of realistic simulations by painting lensed COSMOS sources on Pan-STARRS image cutouts of lens luminous red galaxies (LRGs) with redshifts and velocity dispersions known from the Sloan Digital Sky Survey (SDSS). First, we computed the photometry of mock lenses in gri bands and applied a simple catalog-level neural network to identify a sample of 1 050 207 galaxies with colors and magnitudes similar to the mocks. Second, we trained a convolutional neural network (CNN) on Pan-STARRS gri image cutouts to classify this sample, obtaining sets of 105 760 and 12 382 lens candidates with scores of pCNN > 0.5 and > 0.9, respectively. Extensive tests showed that CNN performance relies heavily on the design of the lens simulations and on the choice of negative examples for training, but little on the network architecture. The CNN correctly classified 14 out of 16 test lenses, which are previously confirmed lens systems above the detection limit of Pan-STARRS. Finally, we visually inspected all galaxies with pCNN > 0.9 to assemble a final set of 330 high-quality, newly discovered lens candidates, while recovering 23 published systems. For a subset, SDSS spectroscopy of the lens central regions proves that our method correctly identifies lens LRGs at z ∼ 0.1–0.7. Five spectra also show robust signatures of high-redshift background sources, and Pan-STARRS imaging confirms one of them as a quadruply imaged red source at zs = 1.185, which is likely a recently quenched galaxy strongly lensed by a foreground LRG at zd = 0.3155.
In the future, high-resolution imaging and spectroscopic follow-up will be required to validate Pan-STARRS lens candidates and derive strong lensing models. We also expect that the efficient and automated two-step classification method presented in this paper will be applicable, with minor adjustments, to the ∼4 mag deeper gri stacks from the Rubin Observatory Legacy Survey of Space and Time (LSST).
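For context, the quoted Einstein radii can be related to the SDSS velocity dispersions through the standard singular isothermal sphere (SIS) relation, a textbook result not stated in the abstract above:

$$\theta_{\mathrm{E}} = 4\pi \left(\frac{\sigma}{c}\right)^{2} \frac{D_{\mathrm{ds}}}{D_{\mathrm{s}}},$$

where σ is the lens velocity dispersion and D_ds, D_s are angular diameter distances from deflector to source and from observer to source. For instance, σ ≈ 320 km s⁻¹ with D_ds/D_s ≈ 0.5 gives θE ≈ 1.5″, the wide-separation regime targeted here.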

Highlights

  • Lensed systems with time-variable sources provide competitive probes of the Hubble constant H0, which are independent of cosmic microwave background (CMB) observations (Planck Collaboration VI 2020) and the local distance ladder (Riess et al. 2019; Freedman et al. 2019, 2020), and allow one to assess the significance of the current H0 tension

  • We adopted a two-step approach: (1) a catalog-based neural network classification of source photometry, (2) a convolutional neural network (CNN) trained on gri image cutouts

  • As the fraction of visual grades ≥2 quickly drops with decreasing CNN scores, down to 1% when extending to 0.8 < pCNN < 0.9, we restricted our final classification to pCNN > 0.9
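The two-step selection summarized in the highlights above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the color/magnitude cuts and the `p_cnn` scores are hypothetical placeholders standing in for the trained catalog-level network and CNN, and only the thresholding logic (pre-selection, then pCNN > 0.9) mirrors the paper.

```python
# Hypothetical sketch of the two-step lens-candidate selection.
# Step 1 stands in for the catalog-level network (color/magnitude
# pre-selection); step 2 applies the CNN-score threshold.

def preselect(catalog, g_r_range=(0.5, 2.5), i_max=21.0):
    """Step 1: keep galaxies whose g-r color and i-band magnitude
    resemble the mock lens population (illustrative cuts only)."""
    return [obj for obj in catalog
            if g_r_range[0] <= obj["g"] - obj["r"] <= g_r_range[1]
            and obj["i"] <= i_max]

def select_candidates(scored, threshold=0.9):
    """Step 2: keep objects whose CNN score exceeds the threshold."""
    return [obj for obj in scored if obj["p_cnn"] > threshold]

catalog = [
    {"id": 1, "g": 21.8, "r": 20.2, "i": 19.5, "p_cnn": 0.95},
    {"id": 2, "g": 20.1, "r": 19.9, "i": 19.0, "p_cnn": 0.97},  # too blue
    {"id": 3, "g": 22.5, "r": 21.0, "i": 20.4, "p_cnn": 0.42},  # low score
]
stage1 = preselect(catalog)
finalists = select_candidates(stage1)
print([obj["id"] for obj in finalists])  # → [1]
```

In the paper itself, step 1 reduces the full survey to ∼10⁶ galaxies and step 2 to ∼10⁴ candidates, before the final visual inspection.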


Summary

Introduction

Lensed systems with time-variable sources provide competitive probes of the Hubble constant H0, which are independent of cosmic microwave background (CMB) observations (Planck Collaboration VI 2020) and the local distance ladder (Riess et al. 2019; Freedman et al. 2019, 2020), and allow one to assess the significance of the current H0 tension. The first two strongly lensed supernovae (SNe) with spatially resolved multiple images were detected in recent years: one core-collapse SN behind the strong-lensing cluster MACS J1149.5+2223 (SN Refsdal, Kelly et al. 2015), and one Type Ia SN behind an isolated lens galaxy (iPTF16geu, Goobar et al. 2017). These findings open new perspectives on future H0 measurements with lensed SNe. Such systems are well-suited for time-delay measurements given the smooth, non-erratic SN light curves.
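The link between measured time delays and H0 invoked above rests on the standard time-delay relation, given here for context (it is not derived in the text):

$$\Delta t_{ij} = \frac{1 + z_{\mathrm{d}}}{c}\, \frac{D_{\mathrm{d}} D_{\mathrm{s}}}{D_{\mathrm{ds}}}\, \Delta\phi_{ij} \equiv \frac{D_{\Delta t}}{c}\, \Delta\phi_{ij},$$

where Δφij is the Fermat-potential difference between images i and j, z_d the deflector redshift, and D_d, D_s, D_ds angular diameter distances. Since every angular diameter distance scales as 1/H0, the time-delay distance D_Δt ∝ 1/H0, so a measured delay combined with a lens mass model constrains H0 independently of both the CMB and the distance ladder.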


