CSST Slitless Spectra: Target Detection and Classification with YOLO

  • Abstract
  • References
  • Similar Papers
Abstract

Addressing the spatial uncertainty and spectral blending challenges in China Space Station Telescope (CSST) slitless spectroscopy, we present a deep-learning-driven, end-to-end framework based on the You Only Look Once (YOLO) models. The approach detects, classifies, and analyzes spectral traces directly from raw 2D images, bypassing traditional, error-accumulating pipelines. YOLOv5 effectively detects both compact zero-order and extended first-order traces, even in highly crowded fields. Building on this, YOLO11 integrates source classification (star/galaxy) and discrete astrophysical parameter estimation (e.g., redshift bins), demonstrating complete spectral trace analysis without manual preprocessing. Our framework processes large images rapidly, learning spectral–spatial features holistically to minimize errors. We achieve high trace-detection precision (YOLOv5) and demonstrate successful quasar identification and binned redshift estimation (YOLO11). This study establishes machine learning as a paradigm shift for slitless spectroscopy, unifying detection, classification, and preliminary parameter estimation in a scalable system. Future work will focus on direct, continuous prediction of astrophysical parameters from raw spectral traces.
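The binned-parameter idea in the abstract can be made concrete with a small sketch: one plausible encoding treats each combination of source type and redshift bin as a distinct detection class, so a standard YOLO classification head covers detection, source classification, and binned redshift estimation at once. The type list, bin edges, and helper names below are illustrative assumptions, not the paper's published configuration.

```python
# Hypothetical class encoding for a YOLO11-style detector that jointly
# classifies source type and a discretized redshift, as described in the
# abstract. The bin edges and type list are illustrative assumptions.
import bisect

SOURCE_TYPES = ["star", "galaxy", "quasar"]
Z_BIN_EDGES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]  # assumed bin boundaries

def encode_class(source_type: str, z: float) -> int:
    """Map (type, redshift) to a single integer class id.
    Stars form a single class (redshift ~ 0); extragalactic sources
    get one class per redshift bin."""
    if source_type == "star":
        return 0
    n_bins = len(Z_BIN_EDGES) - 1
    z_bin = min(bisect.bisect_right(Z_BIN_EDGES, z) - 1, n_bins - 1)
    type_idx = SOURCE_TYPES.index(source_type) - 1  # galaxy=0, quasar=1
    return 1 + type_idx * n_bins + z_bin

def decode_class(cls: int):
    """Inverse mapping from class id back to (type, z_bin)."""
    if cls == 0:
        return ("star", None)
    n_bins = len(Z_BIN_EDGES) - 1
    cls -= 1
    return (SOURCE_TYPES[1 + cls // n_bins], cls % n_bins)
```

With this scheme a detected trace's class id directly yields both the quasar/galaxy/star label and the coarse redshift bin, matching the discrete formulation the abstract describes.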

References (showing 10 of 24 papers)
  • Tsung-Yi Lin et al., "Focal Loss for Dense Object Detection" (Oct 1, 2017). doi:10.1109/iccv.2017.324
  • M. Kümmel et al., "The Slitless Spectroscopy Data Extraction Software aXe", Publications of the Astronomical Society of the Pacific (Jan 1, 2009). doi:10.1086/596715
  • Matias Carrasco Kind et al., "TPZ: photometric redshift PDFs and ancillary information by using prediction trees and random forests", Monthly Notices of the Royal Astronomical Society (May 1, 2013). doi:10.1093/mnras/stt574
  • Juan Terven et al., "A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS", Machine Learning and Knowledge Extraction (Nov 20, 2023). doi:10.3390/make5040083
  • J. A. Caballero et al., "Contamination by field late-M, L, and T dwarfs in deep surveys", Astronomy & Astrophysics (Jul 1, 2008). doi:10.1051/0004-6361:200809520
  • Andrew E. Firth et al., "Estimating photometric redshifts with artificial neural networks", Monthly Notices of the Royal Astronomical Society (Mar 1, 2003). doi:10.1046/j.1365-8711.2003.06271.x
  • Reinhard Genzel et al., "The Galactic Center massive black hole and nuclear star cluster", Reviews of Modern Physics (Dec 20, 2010). doi:10.1103/revmodphys.82.3121
  • Matthew Colless et al., "The 2dF Galaxy Redshift Survey: spectra and redshifts", Monthly Notices of the Royal Astronomical Society (Dec 1, 2001). doi:10.1046/j.1365-8711.2001.04902.x
  • Kaiming He et al., "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition" (Jan 1, 2014). doi:10.1007/978-3-319-10578-9_23
  • Fucheng Zhong et al., "Galaxy Spectra neural Network (GaSNet). II. Using deep learning for spectral classification and redshift predictions", Monthly Notices of the Royal Astronomical Society (Jun 26, 2024). doi:10.1093/mnras/stae1461

Similar Papers
  • Research Article: Jeonghun Lee et al., "YOLO with adaptive frame control for real-time object detection applications", Multimedia Tools and Applications (Sep 18, 2021). doi:10.1007/s11042-021-11480-0

You Only Look Once (YOLO) is the most popular object detection software in many intelligent video applications due to its ease of use and high object detection precision. In addition, in recent years, various intelligent vision systems based on high-performance embedded systems have been developed. Nevertheless, YOLO still requires high-end hardware for successful real-time object detection. In this paper, we first discuss real-time object detection service of YOLO on AI embedded systems with resource constraints. In particular, we point out the problems related to real-time processing in YOLO object detection associated with network cameras, and then propose a novel YOLO architecture with adaptive frame control (AFC) that can efficiently cope with these problems. Through various experiments, we show that the proposed AFC can maintain the high precision and convenience of YOLO, and provide real-time object detection service by minimizing total service delay, which remains a limitation of the pure YOLO.
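The delay problem that adaptive frame control addresses can be illustrated with a minimal frame-skip rule: when inference is slower than the camera frame interval, drop the frames that arrive while the detector is busy so end-to-end delay stays bounded. This is a generic sketch under our own assumptions, not the AFC algorithm proposed in the paper.

```python
# Minimal frame-skip controller: drop however many camera frames arrive
# while one inference is in flight, so the detector never falls behind.
import math

def frames_to_skip(inference_ms: float, frame_interval_ms: float) -> int:
    """Number of incoming frames to drop after each processed frame."""
    if inference_ms <= frame_interval_ms:
        return 0  # the detector keeps up; process every frame
    # ceil(inference / interval) frames arrive per inference; keep one
    return math.ceil(inference_ms / frame_interval_ms) - 1
```

For example, a 70 ms inference against a 30 fps stream (about 33 ms per frame) forces two frames to be dropped per detection; a real AFC scheme would additionally adapt this rate to measured queueing delay.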

  • Conference Article: Qian Wang, "Analysis of target detection algorithms at different stages" (Aug 1, 2021). doi:10.1109/isceic53685.2021.00028

Target detection, as a part of computer vision, occupies an important position in the field of recognition, and algorithm performance has improved significantly at every stage. You Only Look Once (YOLO), for example, has the greatest advantage as a target detection model: an image needs to be viewed only once to identify the class and location of the objects it contains. As YOLO continues to improve, it exhibits even faster and more accurate recognition. This paper discusses the features and advantages shown by the different target detection algorithms at each stage. From the analysis results, YOLO shows more advantages in object detection: detection is fast and can process streaming video in real time, and the number of false background detections is less than half that of other algorithms while showing good generalization.

  • Research Article: Nada Baili et al., "Multistage approach for automatic target detection and recognition in infrared imagery using deep learning", Journal of Applied Remote Sensing (Nov 24, 2022). doi:10.1117/1.jrs.16.048505

Automatic target recognition (ATR) is a challenging task for several computer vision applications. It requires efficient, accurate, and robust methods for target detection and target identification. Deep learning has shown great success in many computer vision applications involving color RGB images. However, the performance of these networks in ATR with infrared sensor data needs further investigation. In this paper, we propose a multistage automatic target detection and recognition (ATDR) system that performs both target detection and target classification on infrared (IR) imagery using deep learning. Our system processes large IR image frames where targets take <1% of the total number of pixels. First, we train a state-of-the-art object detector, You Only Look Once (YOLO), to localize all potential targets in the input image frame. Then, we train a convolutional neural network (CNN) to identify these detections as targets or false alarms. In this second phase, we adapt and analyze the performance of three CNN architectures: a compact and fully connected CNN, VGG16 with batch normalization, and a wide residual neural network (WRN). We also explore the use of a loss function that directly optimizes the area under the receiver operating characteristic (ROC) curve (AUC), and adapt it to our ATR application. To enhance the robustness of the proposed ATR to perturbations and variations introduced during the detection stage, we train our CNN classifiers on targets automatically detected by YOLO, in addition to ground-truth bounding boxes, and apply selected data augmentation techniques. To simulate real testing environments, where the spatial location of the targets within the image frame is unknown, only YOLO-detected boxes are used during validation. We evaluate our ATDR on a real benchmark dataset that includes different vehicles captured at different resolutions. Our experiments have shown that YOLO can detect most of the targets at the expense of generating a high number of false alarms. We show that the VGG-16 network with batch normalization, the best performing model, can correctly identify the classes of the targets, as well as classify the majority of YOLO's false detections into an additional nontarget class. We also show that the proposed training modification to optimize an AUC-based loss function for ATR proved advantageous mainly in identifying difficult targets.

  • Conference Article: Renbo Zhang et al., "Target detection system for surface cracks of hot continuous steel casting based on YOLO V4 model" (May 6, 2022). doi:10.1117/12.2635816

In continuous casting slab production, serious defects have an adverse impact on the subsequent rolling process, so detecting defects in hot billets is of great significance: adjusting the mould flow and flow rate in advance can prevent more defective billets from being produced. Machine-vision crack detection is increasingly applied in industry. In this paper, a detection system for hot billets is constructed by combining YOLO (You Only Look Once) with a public data set, realizing defect detection in industrial production. The system is evaluated with both YOLO V3 and YOLO V4, and their detection results are compared. YOLO V4 uses multi-scale detail boosting at the input for image enhancement, adopts an SPP module and an FPN + PAN structure in the neck, and redefines part of the loss function; these changes make YOLO V4 faster, more accurate, and lighter. From the experimental results, the following conclusions are drawn: the system can detect defects of hot continuous steel casting in industry; the maximum value remains below 0.1, while GIoU is below 0.02 once the epoch count exceeds 200; the accuracy of the YOLO V4 prediction framework is much higher than that of YOLO V3, and its target detection is more accurate; in terms of recall and average AP across categories, YOLO V4 is better, with a maximum gain of 0.1; and among the samples classified as positive across all crack categories, the average proportion of actual positives is also higher.
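The GIoU values quoted above refer to the generalized IoU metric used in modern YOLO box-regression losses. For reference, here is the standard computation for axis-aligned boxes (textbook definition, not code from the paper):

```python
# Generalized IoU (GIoU) for axis-aligned boxes (x1, y1, x2, y2):
# IoU minus the fraction of the smallest enclosing box not covered
# by the union. Ranges from -1 (far apart) to 1 (identical boxes).
def giou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # smallest axis-aligned box enclosing both inputs
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area
```

The corresponding loss is 1 - GIoU, so the "GIoU lower than 0.02" reading above indicates near-converged box regression.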

  • Research Article: Meiyan Zhang et al., "Long-Strip Target Detection and Tracking with Autonomous Surface Vehicle", Journal of Marine Science and Engineering (Jan 5, 2023). doi:10.3390/jmse11010106

As we all know, target detection and tracking are of great significance for marine exploration and protection. In this paper, we propose one Convolutional-Neural-Network-based target detection method named YOLO-Softer NMS for long-strip target detection on the water, which combines You Only Look Once (YOLO) and Softer NMS algorithms to improve detection accuracy. The traditional YOLO network structure is improved, the prediction scale is increased from three to four, and a softer NMS strategy is used to select the original output of the original YOLO method. The performance improvement is compared to the Faster-RCNN algorithm and traditional YOLO method in both mAP and speed, and the proposed YOLO-Softer NMS's mAP reaches 97.09% while still maintaining the same speed as YOLOv3. In addition, the camera imaging model is used to obtain accurate target coordinate information for target tracking. Finally, using the dicyclic loop PID control diagram, the Autonomous Surface Vehicle is controlled to approach the long-strip target with near-optimal path design. The actual test results verify that our method can achieve gratifying long-strip target detection and tracking results.
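The softened suppression idea can be sketched as follows. This shows classic Gaussian soft-NMS score decay (a related, simpler scheme than the Softer-NMS used above, which additionally performs variance-weighted box refinement that is omitted here):

```python
# Gaussian soft-NMS: instead of discarding boxes that overlap a kept
# detection, decay their scores by exp(-IoU^2 / sigma). A sketch of the
# soft-suppression family, not the paper's Softer-NMS implementation.
import math

def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Return (boxes, scores) in descending score order, with each score
    decayed by the box's overlap with previously kept boxes."""
    boxes, scores = list(boxes), list(scores)
    kept_boxes, kept_scores = [], []
    while boxes:
        i = max(range(len(scores)), key=scores.__getitem__)
        b, s = boxes.pop(i), scores.pop(i)
        if s < score_thresh:
            break
        kept_boxes.append(b)
        kept_scores.append(s)
        # decay remaining scores by overlap with the kept box
        scores = [sc * math.exp(-iou(b, bb) ** 2 / sigma)
                  for sc, bb in zip(scores, boxes)]
    return kept_boxes, kept_scores
```

Non-overlapping detections keep their scores unchanged, while heavily overlapping ones are down-weighted rather than deleted outright, which helps with elongated, densely packed targets.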

  • Research Article: Amit Majumder et al., "YoloGA: An Evolutionary Computation Based YOLO Algorithm to Detect Personal Protective Equipment", Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology (May 15, 2025). doi:10.1177/18758967251338695

Personal Protective Equipment (PPE) detection plays a critical role in ensuring workplace safety and compliance with industrial regulations. Traditional object detection algorithms, such as YOLO (You Only Look Once), provide real-time and accurate detection capabilities but often require extensive manual tuning of hyperparameters and anchor boxes for optimal performance. This paper explores the integration of evolutionary computation with YOLO to develop an adaptive, high-precision PPE detection system. Because 24-h human supervision is impossible, it has long been difficult to guarantee the use of PPE; however, such monitoring can be carried out using technological aids or automated programs. The current study outlines a systematic method for tracking employees' PPE, such as hard hats and safety vests, in real time using Deep Learning (DL) models built on the YOLO architecture. The suggested method employs a small YOLO architecture (YOLOv8s) and a Genetic Algorithm (GA) based evolutionary computation for object detection and localization. With this method, we have built a model with a Mean Average Precision (mAP) value of 87.2% on the validation data set and 83.1% on the test data set, highlighting the effectiveness of evolutionary optimization in refining object detection performance. This framework presents a scalable and automated solution for PPE monitoring, contributing to enhanced workplace safety through Artificial Intelligence.
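The GA-driven tuning described above can be sketched as a generic genetic-algorithm loop over a continuous hyperparameter space. The fitness function here is a stand-in for a real validation-mAP evaluation, and all parameter choices (population size, elite fraction, mutation scale) are our own illustrative assumptions.

```python
# Toy genetic algorithm for hyperparameter search: elitist selection,
# uniform crossover, and Gaussian mutation clipped to the search bounds.
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]                         # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            j = rng.randrange(dim)                            # mutation
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In a real YoloGA-style setup, `fitness` would train or fine-tune a detector with the candidate hyperparameters and return its validation mAP, making each evaluation expensive but the loop structure the same.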

  • Research Article: Rui Wang et al., "Real-time vehicle target detection in inclement weather conditions based on YOLOv4", Frontiers in Neurorobotics (Mar 9, 2023). doi:10.3389/fnbot.2023.1058723

As a crucial component of the autonomous driving task, the vehicle target detection algorithm directly impacts driving safety, particularly in inclement weather, where detection precision and speed are significantly decreased. This paper investigated the You Only Look Once (YOLO) algorithm and proposed an enhanced YOLOv4 for real-time target detection in inclement weather conditions. The algorithm uses an anchor-free approach to tackle the poor fit of YOLO's preset anchor boxes, adapting better to detected target sizes and making it suitable for multi-scale target identification. The improved FPN network transmits feature maps to the anchor-free heads to expand the model's receptive field and maximize the use of feature information, and a decoupled detection head increases the precision of target category and location prediction. The experimental dataset BDD-IW was created by extracting specific labeled photos from the BDD100K dataset and fogging some of them to test the proposed method's practical implications in terms of detection precision and speed in inclement weather conditions. The proposed method is compared to advanced target detection algorithms on this dataset. Experimental results indicated that the proposed method achieved a mean average precision of 60.3%, which is 5.8 percentage points higher than the original YOLOv4; the inference speed of the algorithm is enhanced by 4.5 fps compared to the original, reaching a real-time detection speed of 69.44 fps. The robustness test results indicated that the proposed model has considerably improved the capacity to recognize targets in inclement weather conditions and has achieved high precision in real-time detection.

  • Research Article: Chengyang Peng et al., "Adversarial enhancement generation method for side-scan sonar images based on DDPM–YOLO", Marine Geodesy (Aug 15, 2024). doi:10.1080/01490419.2024.2393190

As marine exploration advances, efficient and accurate seabed target identification becomes increasingly critical. However, traditional methods and current technologies face challenges such as scarce samples and complex imaging conditions when dealing with side-scan sonar images. Given the scarcity of sample augmentation methods for side-scan sonar, this paper iteratively trains Denoising Diffusion Probabilistic Models (DDPM), integrating the DDPM diffusion model and the downstream You Only Look Once (YOLO) retrieval task into a mutually reinforcing framework, proposing an adversarial enhancement generation method based on the DDPM and YOLO detection models. Experiments demonstrate that the DDPM model generated through this adversarial enhancement generation method can improve the accuracy of downstream YOLO target detection tasks by 7%. The images generated by this model also perform optimally on the Fréchet Inception Distance (FID), Maximum Mean Discrepancy (MMD), and Learned Perceptual Image Patch Similarity (LPIPS) metrics, thereby proving that our method can enhance the quality of generated images from the side-scan sonar diffusion model and offer a new avenue for improving the construction of underwater target detection models.

  • Research Article: Robert Carrol, "Some features of the maps from potential to spectral data", Applicable Analysis (Jan 1, 1987). doi:10.1080/00036818708839701

Some significant spectral quantities for half-line impedance problems are displayed and studied as functions of the appropriate potentials. Localizations (Fréchet derivatives) are obtained in terms of products of eigenfunctions; a systematic development of Marchenko (M) equations is given, with recovery formulas for potentials via spectral traces of transmutation kernels containing appropriate spectral data; and a spectral trace deduced from calculations with Gelfand-Levitan (G-L) kernels containing suitable spectral data leads to formulas for a kind of spectral transform (IST) extending the sine transform, with products of eigenfunctions in the kernels.

  • Research Article: Chiman Kwan et al., "Target Detection and Classification Improvements using Contrast Enhanced 16-bit Infrared Videos", Signal & Image Processing: An International Journal (Feb 28, 2021). doi:10.5121/sipij.2021.12103

In our earlier target detection and classification papers, we used 8-bit infrared videos in the Defense Systems Information Analysis Center (DSIAC) video dataset. In this paper, we focus on how we can improve the target detection and classification results using 16-bit videos. One problem with the 16-bit videos is that some image frames have very low contrast. Two methods were explored to improve upon previous detection and classification results. The first method used to improve contrast was effectively the same as the baseline 8-bit video processing but using the 16-bit raw data rather than the 8-bit data taken from the avi files. The second method was a second order histogram matching algorithm that preserves the 16-bit nature of the videos while providing normalization and contrast enhancement. Results showed the second order histogram matching algorithm improved target detection using You Only Look Once (YOLO) and classification using Residual Network (ResNet) performance. The average precision (AP) metric in YOLO was improved by 8%, which is quite significant, and the overall accuracy (OA) of ResNet was improved by 12%, which is also very significant.
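The abstract does not spell out the second order histogram matching algorithm. One simple reading of "second order" is matching the first two moments (mean and variance) of a reference frame, sketched here purely as an assumed interpretation:

```python
# Moment matching for low-contrast frames: linearly remap pixel values so
# the frame's mean and standard deviation match reference statistics.
# An assumed, simplified stand-in for the paper's algorithm.
import statistics

def match_moments(frame, ref_mean, ref_std):
    """Return the frame rescaled to the reference mean/std."""
    m = statistics.fmean(frame)
    s = statistics.pstdev(frame) or 1.0  # guard against flat frames
    return [(x - m) / s * ref_std + ref_mean for x in frame]
```

Because the mapping is linear, it stretches contrast without clipping the 16-bit dynamic range, which is consistent with the "preserves the 16-bit nature" property described above.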

  • Research Article: Hamsa Padmanabhan et al., "Impact of astrophysics on cosmology forecasts for 21 cm surveys", Monthly Notices of the Royal Astronomical Society (Mar 14, 2019). doi:10.1093/mnras/stz683

We use the results of previous work building a halo model formalism for the distribution of neutral hydrogen, along with experimental parameters of future radio facilities, to place forecasts on astrophysical and cosmological parameters from next generation surveys. We consider 21 cm intensity mapping surveys conducted using the BINGO, CHIME, FAST, TianLai, MeerKAT and SKA experimental configurations. We work with the 5-parameter cosmological dataset of {$\Omega_m, \sigma_8, h, n_s, \Omega_b$} assuming a flat $\Lambda$CDM model, and the astrophysical parameters {$v_{c,0}, \beta$} which represent the cutoff and slope of the HI–halo mass relation. We explore (i) quantifying the effects of the astrophysics on the recovery of the cosmological parameters, (ii) the dependence of the cosmological forecasts on the details of the astrophysical parametrization, and (iii) the improvement of the constraints on probing smaller scales in the HI power spectrum. For an SKA I MID intensity mapping survey alone, probing scales up to $\ell_{\rm max} = 1000$, we find a factor of $1.1 - 1.3$ broadening in the constraints on $\Omega_b$ and $\Omega_m$, and of $2.4 - 2.6$ on $h$, $n_s$ and $\sigma_8$, if we marginalize over astrophysical parameters without any priors. However, even the prior information coming from the present knowledge of the astrophysics largely alleviates this broadening. These findings do not change significantly on considering an extended HI–halo mass (HIHM) relation, illustrating the robustness of the results to the choice of the astrophysical parametrization. Probing scales up to $\ell_{\rm max} = 2000$ improves the constraints by factors of 1.5-1.8. The forecasts improve on increasing the number of tomographic redshift bins, saturating, in many cases, with 4-5 redshift bins. We also forecast constraints for intensity mapping with other experiments, and draw similar conclusions.

  • Research Article: Qianli Liu et al., "Scene-Specialized Multitarget Detector with an SMC-PHD Filter and a YOLO Network", Computational Intelligence and Neuroscience (Apr 28, 2022). doi:10.1155/2022/1010767

You only look once (YOLO) is one of the most efficient target detection networks. However, the performance of the YOLO network decreases significantly when the variation between the training data and the real data is large. To automatically customize the YOLO network, we suggest a novel transfer learning algorithm with the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter and Gaussian mixture probability hypothesis density (GM-PHD) filter. The proposed framework can automatically customize the YOLO framework with unlabelled target sequences. The frames of the unlabelled target sequences are automatically labelled. The detection probability and clutter density of the SMC-PHD filter and GM-PHD are applied to retrain the YOLO network for occluded targets and clutter. A novel likelihood density with the confidence probability of the YOLO detector and visual context indications is implemented to choose target samples. A simple resampling strategy is proposed for SMC-PHD YOLO to address the weight degeneracy problem. Experiments with different datasets indicate that the proposed framework achieves positive outcomes relative to state-of-the-art frameworks.

  • Research Article: "Sea-surface object detection scheme for USV under foggy environment", Indian Journal of Geo-Marine Sciences (Nov 1, 2021). doi:10.56042/ijms.v50i11.66765

Sea-surface target detection is investigated for the visual image-based autonomous control of an Unmanned Surface Vessel (USV). A traditional approach in previous target detection algorithms is to dehaze sea-surface images first; however, this makes it difficult to balance dehazing performance and detection speed. To solve this problem, a YOLO (You Only Look Once) based target detection network with good anti-fog ability is proposed for sea-surface target detection. In the proposed method, the target detection network is trained off-line to obtain good anti-fog ability, and target detection is performed on-line. A hazed sample generation model is built based on the atmospheric single scattering inverse model to obtain sufficient samples for off-line training. The target detection network is then trained on the generated samples according to a new learning strategy to obtain good anti-fog ability. Finally, comparative experimental results demonstrate the effectiveness of the proposed target detection algorithm.
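The hazed-sample generation above is built on the standard atmospheric single scattering model, I = J*t + A*(1 - t), where J is the clear scene radiance, A the airlight, and t the transmission. A minimal sketch of forward haze synthesis under that model follows; the coefficient values are illustrative, not the paper's.

```python
# Synthetic haze via the single scattering model I = J*t + A*(1 - t),
# with transmission t = exp(-beta * d) for scattering coefficient beta
# and scene depth d. Illustrative parameters, not the paper's generator.
import math

def add_haze(pixel, depth, beta=0.8, airlight=1.0):
    """Apply haze to a clear pixel intensity in [0, 1]."""
    t = math.exp(-beta * depth)        # transmission along the view ray
    return pixel * t + airlight * (1.0 - t)
```

At zero depth the pixel is unchanged, and with increasing depth it converges to the airlight value, which is the whitening effect fog produces in real imagery; running this over clear frames yields training samples with controllable fog density.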

  • Research Article: Chiman Kwan et al., "Practical Approaches to Target Detection in Long Range and Low Quality Infrared Videos", Zenodo (Jun 28, 2021). doi:10.5281/zenodo.5101396

It is challenging to detect vehicles in long range and low quality infrared videos using deep learning techniques such as You Only Look Once (YOLO) mainly due to small target size. This is because small targets do not have detailed texture information. This paper focuses on practical approaches for target detection in infrared videos using deep learning techniques. We first investigated a newer version of You Only Look Once (YOLO v4). We then proposed a practical and effective approach by training the YOLO model using videos from longer ranges. Experimental results using real infrared videos ranging from 1000 m to 3500 m demonstrated huge performance improvements. In particular, the average detection percentage over the six ranges of 1000 m to 3500 m improved from 54% when we used the 1500 m videos for training to 95% if we used the 3000 m videos for training.
