Simulator-based Explanation and Debugging of Hazard-triggering Events in DNN-based Safety-critical Systems

When Deep Neural Networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing. For DNNs processing images, engineers visually inspect all failure-inducing images to determine common characteristics among them. Such characteristics correspond to hazard-triggering events (e.g., low illumination) that are essential inputs for safety analysis. Though informative, such activity is expensive and error prone. To support such safety analysis practices, we propose Simulator-based Explanations for DNN failurEs (SEDE), a technique that generates readable descriptions for commonalities in failure-inducing, real-world images and improves the DNN through effective retraining. SEDE leverages the availability of simulators, which are commonly used for cyber-physical systems. It relies on genetic algorithms to drive simulators toward the generation of images that are similar to failure-inducing, real-world images in the test set; it then employs rule learning algorithms to derive expressions that capture commonalities in terms of simulator parameter values. The derived expressions are then used to generate additional images to retrain and improve the DNN. With DNNs performing in-car sensing tasks, SEDE successfully characterized hazard-triggering events leading to a DNN accuracy drop. Also, SEDE enabled retraining leading to significant improvements in DNN accuracy, up to 18 percentage points.

Open Access
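The SEDE pipeline described above couples a simulator-driving genetic search with rule learning over simulator parameters. The snippet below is a minimal, hypothetical sketch of that general idea, not the authors' implementation: the toy render_features "simulator", the fitness function, and the parameter names are all placeholders chosen for illustration.

```python
# Hypothetical sketch of the SEDE idea (not the authors' code):
# (1) evolve simulator parameters so rendered features resemble
#     failure-inducing images, (2) learn readable rules over parameters.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def render_features(params):
    """Stand-in for a simulator plus feature extractor (placeholder)."""
    illumination, head_angle, occlusion = params
    # Toy mapping from simulator parameters to an image-feature vector.
    return np.array([illumination, np.sin(head_angle), occlusion ** 2])

# Pretend these are features of failure-inducing real-world test images.
failure_centroid = np.array([0.1, 0.8, 0.6])

def fitness(params):
    # Higher is better: closeness to the failure-inducing cluster.
    return -np.linalg.norm(render_features(params) - failure_centroid)

# Simple (mu + lambda) evolutionary loop over simulator parameters.
population = rng.uniform(0.0, 1.0, size=(40, 3))
for _ in range(100):
    children = np.clip(population + rng.normal(0.0, 0.05, population.shape), 0.0, 1.0)
    pool = np.vstack([population, children])
    scores = np.array([fitness(p) for p in pool])
    population = pool[np.argsort(scores)[-40:]]  # keep the 40 fittest

# Label parameter vectors by whether they reproduce the failure cluster,
# then learn a small tree whose branches read as rules over parameters.
samples = rng.uniform(0.0, 1.0, size=(500, 3))
labels = np.array([fitness(p) > -0.3 for p in samples])
tree = DecisionTreeClassifier(max_depth=2).fit(samples, labels)
print(export_text(tree, feature_names=["illumination", "head_angle", "occlusion"]))
```

In the actual approach, the derived expressions over simulator parameters are then used to generate additional images for retraining; here the decision-tree printout merely stands in for those readable rules.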
GRANDMA and HXMT Observations of GRB 221009A: The Standard Luminosity Afterglow of a Hyperluminous Gamma-Ray Burst—In Gedenken an David Alexander Kann

GRB 221009A is the brightest gamma-ray burst (GRB) detected in more than 50 yr of study. In this paper, we present observations in the X-ray and optical domains obtained by the GRANDMA Collaboration and the Insight Collaboration. We study the optical afterglow with empirical fitting using the GRANDMA+HXMT-LE data sets augmented with data from the literature up to 60 days. We then model the afterglow numerically using a Bayesian approach, and we find that the GRB afterglow, extinguished by a large dust column, is most likely behind a combination of a large Milky Way dust column and moderate low-metallicity dust in the host galaxy. Using the GRANDMA+HXMT-LE+XRT data set, we find that the simplest model, where the observed afterglow is produced by synchrotron radiation at the forward external shock during the deceleration of a top-hat relativistic jet by a uniform medium, fits the multiwavelength observations only moderately well, with a tension between the observed temporal and spectral evolution. This tension is confirmed when using the augmented data set. We find that the consideration of a jet structure (Gaussian or power law), the inclusion of synchrotron self-Compton emission, or the presence of an underlying supernova does not improve the predictions. Placed in the global context of GRB optical afterglows, we find that the afterglow of GRB 221009A is luminous but not extraordinarily so, highlighting that some aspects of this GRB do not deviate from the global known sample despite its extreme energetics and the peculiar afterglow evolution.

Open Access
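The empirical light-curve fitting mentioned in this abstract can be illustrated, in heavily simplified form, by a single power-law decay fit in magnitudes. The sketch below uses synthetic photometry and generic scipy fitting; it is not the paper's model, data, or Bayesian inference setup.

```python
# Illustrative only: a generic single power-law afterglow fit,
# m(t) = m_ref + 2.5 * alpha * log10(t), fitted with scipy.
import numpy as np
from scipy.optimize import curve_fit

def power_law_mag(t_days, m_ref, alpha):
    """Apparent magnitude for a flux decaying as t**(-alpha)."""
    return m_ref + 2.5 * alpha * np.log10(t_days)

# Synthetic optical photometry: time since trigger (days), magnitude, error.
t_obs = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
mag_obs = np.array([15.1, 16.2, 17.0, 17.8, 18.9, 19.7, 21.0])
mag_err = np.full_like(mag_obs, 0.1)

popt, pcov = curve_fit(power_law_mag, t_obs, mag_obs, sigma=mag_err, p0=[17.0, 1.0])
m_ref, alpha = popt
print(f"fitted m(1 day) = {m_ref:.2f} mag, temporal decay index alpha = {alpha:.2f}")
```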
GRANDMA observations of ZTF/Fink transients during summer 2021

We present our follow-up observations with GRANDMA of transient sources revealed by the Zwicky Transient Facility (ZTF). Over a period of six months, all ZTF alerts were examined in real time by a dedicated science module implemented in the Fink broker, which will be used to filter transients discovered by the Vera C. Rubin Observatory. In this article, we present three selection methods to identify kilonova candidates. Out of more than 35 million alerts, a hundred sources passed our selection criteria. Six were then followed up by GRANDMA (by both professional and amateur astronomers). The majority were finally classified either as asteroids or as supernova events. We mobilized 37 telescopes, bringing together a large sample of images taken under various conditions and of varying quality. To complement the orphan kilonova candidates, we included three additional supernova alerts to conduct further observations during summer 2021. We demonstrate the importance of the amateur astronomer community that contributed images for scientific analyses of new sources discovered in the magnitude range r′ = 17−19 mag. We based our rapid kilonova classification on the decay rate of the optical source, which should exceed 0.3 mag d−1. GRANDMA’s follow-up determined the fading rate within 1.5 ± 1.2 d post-discovery, without waiting for further observations from ZTF. No confirmed kilonovae were discovered during our observing campaign. This work will be continued in the coming months in preparation for kilonova searches in the next gravitational-wave observing run, O4.

Open Access
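The stated rapid-classification criterion, an optical source fading faster than 0.3 mag per day, can be sketched as a simple photometric cut. The snippet below is a simplified illustration with placeholder function names; the actual Fink science module and GRANDMA selection apply additional criteria not shown here.

```python
# Simplified sketch of the stated kilonova cut: flag a transient as a
# candidate if its optical source fades faster than 0.3 mag per day.
from typing import Optional, Sequence, Tuple

def fade_rate(photometry: Sequence[Tuple[float, float]]) -> Optional[float]:
    """Return mag/day between first and last detections (MJD, mag), or None."""
    if len(photometry) < 2:
        return None
    (t0, m0), (t1, m1) = photometry[0], photometry[-1]
    if t1 == t0:
        return None
    return (m1 - m0) / (t1 - t0)  # positive = fading (magnitudes increase)

def is_kilonova_candidate(photometry, threshold=0.3):
    rate = fade_rate(sorted(photometry))
    return rate is not None and rate > threshold

# Example: an alert fading by ~0.5 mag/day passes the cut.
print(is_kilonova_candidate([(59400.0, 18.2), (59402.0, 19.2)]))  # True
```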
Automatic test suite generation for key-points detection DNNs using many-objective search (experience paper)

Automatically detecting the positions of key-points (e.g., facial key-points or finger key-points) in an image is an essential problem in many applications, such as driver's gaze detection and drowsiness detection in automated driving systems. With recent advances in Deep Neural Networks (DNNs), Key-Points detection DNNs (KP-DNNs) have been increasingly employed for that purpose. Nevertheless, KP-DNN testing and validation remain challenging because KP-DNNs predict many independent key-points at the same time (where each individual key-point may be critical in the targeted application) and images can vary a great deal according to many factors. In this paper, we present an approach to automatically generate test data for KP-DNNs using many-objective search. In our experiments, focused on facial key-points detection DNNs developed for an industrial automotive application, we show that our approach can generate test suites that cause severe mispredictions for, on average, more than 93% of all key-points. In comparison, random search-based test data generation leads to severe mispredictions for only 41% of them. Many of these mispredictions, however, are not avoidable and should therefore not be considered failures. We also empirically compare state-of-the-art, many-objective search algorithms and their variants, tailored for test suite generation. Furthermore, we investigate and demonstrate how to learn specific conditions, based on image characteristics (e.g., head posture and skin color), that lead to severe mispredictions. Such conditions serve as a basis for risk analysis or DNN retraining.

Open Access
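The many-objective formulation described in this abstract, one objective per key-point, can be sketched as a fitness vector that a search algorithm tries to push above a severity threshold. The snippet below is a hypothetical illustration: simulate_image, kp_dnn_predict, the key-point count, and the severity threshold are placeholders, not the paper's tooling or values.

```python
# Hypothetical sketch of the many-objective formulation: one objective per
# key-point, each measuring the prediction error the search tries to maximize.
import numpy as np

NUM_KEYPOINTS = 27  # value chosen for illustration only

def simulate_image(params):
    """Placeholder: render an image and its ground-truth key-points from parameters."""
    rng = np.random.default_rng(int(params.sum() * 1e6) % (2**32))
    ground_truth = rng.uniform(0, 128, size=(NUM_KEYPOINTS, 2))
    return params, ground_truth

def kp_dnn_predict(image, ground_truth):
    """Placeholder DNN: returns key-points perturbed as a function of the input."""
    noise = np.tanh(image.sum()) * 10.0
    return ground_truth + noise

def objective_vector(params):
    """One objective per key-point: Euclidean error (higher = more severe)."""
    image, ground_truth = simulate_image(params)
    predicted = kp_dnn_predict(image, ground_truth)
    return np.linalg.norm(predicted - ground_truth, axis=1)

# A many-objective search (e.g., an NSGA-II variant) would evolve `params`
# to push as many of these per-key-point errors as possible above a threshold.
errors = objective_vector(np.array([0.3, 0.7, 0.1]))
print((errors > 7.0).sum(), "of", NUM_KEYPOINTS, "key-points severely mispredicted")
```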