Applications of machine learning in gravitational-wave research with current interferometric detectors

Abstract

This article provides an overview of the current state of machine learning in gravitational-wave research with interferometric detectors. Such applications are often still in their early days, but they have reached sufficient popularity to warrant an assessment of their impact across various domains, including detector studies, noise and signal simulations, and the detection and interpretation of astrophysical signals. In detector studies, machine learning could be useful for optimizing instruments such as LIGO, Virgo, KAGRA, and future detectors. Algorithms could predict and help mitigate environmental disturbances in real time, ensuring that detectors operate at peak performance. Furthermore, machine-learning tools for characterizing and cleaning data after it is taken have already become crucial for achieving the best sensitivity of the LIGO–Virgo–KAGRA network. In data analysis, machine learning has already been applied as an alternative to traditional methods for signal detection, source localization, noise reduction, and parameter estimation. For some signal types, it can already yield improved efficiency and robustness, though in many other areas traditional methods remain dominant. As the field evolves, the role of machine learning in advancing gravitational-wave research is expected to become increasingly prominent. This report highlights recent advancements, challenges, and perspectives for the current detector generation, with a brief outlook on the next generation of gravitational-wave detectors.

Similar Papers
  • Research Article
  • Citations: 68
  • 10.1016/j.jmsy.2023.10.010
The advance of digital twin for predictive maintenance: The role and function of machine learning
  • Oct 19, 2023
  • Journal of Manufacturing Systems
  • Chong Chen + 4 more


  • Conference Article
  • Citations: 1
  • 10.1117/12.2289919
Recent progress on monolithic fiber amplifiers for next generation of gravitational wave detectors
  • Feb 26, 2018
  • Dietmar Kracht + 11 more

Single-frequency fiber amplifiers in MOPA configuration operating at 1064 nm (Yb³⁺) and around 1550 nm (Er³⁺ or Er³⁺:Yb³⁺) are promising candidates to fulfill the challenging requirements of laser sources for the next generation of interferometric gravitational wave detectors (GWDs). Most probably, the next generation of GWDs will operate not only at 1064 nm but also at 1550 nm to cover a broader range of frequencies in which gravitational waves are detectable. We developed an engineering fiber amplifier prototype at 1064 nm emitting 215 W of linearly polarized light in the TEM00 mode. The system consists of three modules: the seed source, the pre-amplifier, and the main amplifier. The modular design ensures reliable long-term operation, decreases system complexity, and simplifies repair and maintenance procedures. It also allows for the future integration of upgraded fiber amplifier systems without excessive downtimes. We also developed and characterized a fiber amplifier prototype at around 1550 nm that emits 100 W of linearly polarized light in the TEM00 mode. This prototype uses an Er³⁺:Yb³⁺ codoped fiber that is pumped off-resonantly at 940 nm. The off-resonant pumping scheme improves the Yb³⁺-to-Er³⁺ energy transfer and prevents excessive generation of Yb³⁺ amplified spontaneous emission (ASE).

  • Research Article
  • Citations: 12
  • 10.1016/j.cja.2022.08.011
Surrogate role of machine learning in motor-drive optimization for more-electric aircraft applications
  • Aug 20, 2022
  • Chinese Journal of Aeronautics
  • Yuan Gao + 5 more


  • Research Article
  • 10.2196/60697
The Role of Machine Learning in the Detection of Cardiac Fibrosis in Electrocardiograms: Scoping Review.
  • Dec 30, 2024
  • JMIR cardio
  • Julia Handra + 12 more

Cardiovascular disease remains the leading cause of mortality worldwide. Cardiac fibrosis impacts the underlying pathophysiology of many cardiovascular diseases by altering structural integrity and impairing electrical conduction. Identifying cardiac fibrosis is essential for the prognosis and management of cardiovascular disease; however, current diagnostic methods face challenges due to invasiveness, cost, and inaccessibility. Electrocardiograms (ECGs) are widely available and cost-effective for monitoring cardiac electrical activity. While ECG-based methods for inferring fibrosis exist, they are not commonly used due to accuracy limitations and the need for cardiac expertise. However, the ECG shows promise as a target for machine learning (ML) applications in fibrosis detection. This study aims to synthesize and critically evaluate the current state of ECG-based ML approaches for cardiac fibrosis detection. We conducted a scoping review of research in ECG-based ML applications to identify cardiac fibrosis. Comprehensive searches were performed in PubMed, IEEE Xplore, Scopus, Web of Science, and DBLP databases, including publications up to October 2024. Studies were included if they applied ML techniques to detect cardiac fibrosis using ECG or vectorcardiogram data and provided sufficient methodological details and outcome metrics. Two reviewers independently assessed eligibility and extracted data on the ML models used, their performance metrics, study designs, and limitations. We identified 11 studies evaluating ML approaches for detecting cardiac fibrosis using ECG data. These studies used various ML techniques, including classical (8/11, 73%), ensemble (3/11, 27%), and deep learning models (4/11, 36%). Support vector machines were the most used classical model (6/11, 55%), with the best-performing models of each study achieving accuracies of 77% to 93%. 
Among deep learning approaches, convolutional neural networks showed promising results, with one study reporting an area under the receiver operating characteristic curve (AUC) of 0.89 when combined with clinical features. Notably, a large-scale convolutional neural network study (n=14,052) achieved an AUC of 0.84 for detecting cardiac fibrosis, outperforming cardiologists (AUC 0.63-0.66). However, many studies had limited sample sizes and lacked external validation, potentially impacting the generalizability of the findings. Variability in reporting methods may affect the reproducibility and applicability of these ML-based approaches. ML-augmented ECG analysis shows promise for accessible and cost-effective detection of cardiac fibrosis. However, there are common limitations with respect to study design and insufficient external validation, raising concerns about the generalizability and clinical applicability of the findings. Inconsistencies in methodologies and incomplete reporting further impede cross-study comparisons. Future work may benefit from using prospective study designs, larger and more clinically and demographically diverse datasets, advanced ML models, and rigorous external validation. Addressing these challenges could pave the way for the clinical implementation of ML-based ECG detection of cardiac fibrosis to improve patient outcomes and health care resource allocation.

  • Research Article
  • 10.1097/crd.0000000000000715
Role of Machine Learning and Artificial Intelligence in Arrhythmias and Electrophysiology.
  • May 18, 2024
  • Cardiology in Review
  • Muhammad Umer Riaz Gondal + 7 more

Machine learning (ML), a subset of artificial intelligence (AI) centered on machines learning from extensive datasets, stands at the forefront of a technological revolution shaping various facets of society. Cardiovascular medicine has emerged as a key domain for ML applications, with considerable efforts to integrate these innovations into routine clinical practice. Within cardiac electrophysiology, ML applications, especially in the automated interpretation of electrocardiograms, have garnered substantial attention in existing literature. However, less recognized are the diverse applications of ML in cardiac electrophysiology and arrhythmias, spanning basic science research on arrhythmia mechanisms, both experimental and computational, as well as contributions to enhanced techniques for mapping cardiac electrical function and translational research related to arrhythmia management. This comprehensive review delves into various ML applications within the scope of this journal, organized into 3 parts. The first section provides a fundamental understanding of general ML principles and methodologies, serving as a foundational resource for readers interested in exploring ML applications in arrhythmia research. The second part offers an in-depth review of studies in arrhythmia and electrophysiology that leverage ML methodologies, showcasing the broad potential of ML approaches. Each subject is thoroughly outlined, accompanied by a review of notable ML research advancements. Finally, the review delves into the primary challenges and future perspectives surrounding ML-driven cardiac electrophysiology and arrhythmias research.

  • Research Article
  • 10.2196/77494
Machine Learning in Health Economic Evaluations: Protocol for a Scoping Review
  • Sep 24, 2025
  • JMIR Research Protocols
  • Hanan Daghash + 8 more

Background: In recent years, the development of machine learning (ML) applications has increased substantially, indicating the potential role of ML in transforming health care. However, the integration of ML approaches into health economic evaluations is underexplored and faces several challenges. Objective: This scoping review aims to explore the applications of ML in health economic evaluations. This review will also seek to identify potential challenges to the use of ML in health economic evaluations. Methods: This review will use PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) methods. The search will be conducted on the MEDLINE (Ovid), Embase (Ovid), IEEE Xplore, and Cochrane Library databases. The eligibility criteria of the selection process will be based on the study types, data sources, methods, and outcomes (SDMO) framework. Results: The database search yielded 4141 records after removal of retractions and duplicates. Title and abstract screening of 3718 records has been completed, resulting in 30 reports retrieved for eligibility assessment. Data extraction and charting are currently in progress. The results will be published in peer-reviewed journals by the end of 2025. Conclusions: This review will help to build up the current understanding of how ML applications are integrated into health economic evaluations. It will also explore the potential barriers to and challenges of using ML in health economic evaluations. International Registered Report Identifier (IRRID): DERR1-10.2196/77494

  • Research Article
  • Citations: 123
  • 10.1145/3545574
The Role of Machine Learning in Cybersecurity
  • Mar 7, 2023
  • Digital Threats: Research and Practice
  • Giovanni Apruzzese + 6 more

Machine Learning (ML) represents a pivotal technology for current and future information systems, and many domains already leverage the capabilities of ML. However, deployment of ML in cybersecurity is still at an early stage, revealing a significant discrepancy between research and practice. Such a discrepancy has its root cause in the current state of the art, which does not allow us to identify the role of ML in cybersecurity. The full potential of ML will never be unleashed unless its pros and cons are understood by a broad audience. This article is the first attempt to provide a holistic understanding of the role of ML in the entire cybersecurity domain—to any potential reader with an interest in this topic. We highlight the advantages of ML with respect to human-driven detection methods, as well as the additional tasks that can be addressed by ML in cybersecurity. Moreover, we elucidate various intrinsic problems affecting real ML deployments in cybersecurity. Finally, we present how various stakeholders can contribute to future developments of ML in cybersecurity, which is essential for further progress in this field. Our contributions are complemented with two real case studies describing industrial applications of ML as defense against cyber-threats.

  • Research Article
  • Citations: 2
  • 10.3390/info14010053
Tool Support for Improving Software Quality in Machine Learning Programs
  • Jan 16, 2023
  • Information
  • Kwok Sun Cheng + 3 more

Machine learning (ML) techniques discover knowledge from large amounts of data, and ML modeling is becoming essential to software systems in practice. ML research communities have focused on the accuracy and efficiency of ML models, while less attention has been paid to validating the quality of those models. Validating ML applications is a challenging and time-consuming process for developers, since prediction accuracy heavily depends on the generated models. ML applications are written in a relatively data-driven programming style on top of black-box ML frameworks, so the datasets and the ML application each need to be investigated individually. The validation tasks therefore take considerable time and effort. To address this limitation, we present a novel quality validation technique, called MLVal, that increases the reliability of ML models and applications. Our approach helps developers inspect the training data and the generated features for the ML model. A data validation technique is important and beneficial to software quality, since the quality of the input data affects the speed and accuracy of training and inference. Inspired by software debugging/validation for reproducing reported bugs, MLVal takes as input an ML application and its training datasets to build the ML models, helping ML application developers easily reproduce and understand anomalies in the ML application. We have implemented an Eclipse plugin for MLVal that allows developers to validate the prediction behavior of their ML applications, the ML model, and the training data in the Eclipse IDE. In our evaluation, we used 23,500 documents from the bioengineering research domain. We assessed the ability of the MLVal validation technique to effectively help ML application developers: (1) investigate the connection between the produced features and the labels in the training model, and (2) detect errors early to secure the quality of models from better data. Our approach reduces the cost of the engineering effort needed to validate problems, improving data-centric workflows of ML application development.

  • Research Article
  • Citations: 1
  • 10.1051/0004-6361/202245205
Adding gamma-ray polarimetry to the multi-messenger era
  • Jan 1, 2023
  • Astronomy & Astrophysics
  • Merlin Kole + 3 more

Context. The last decade has seen the emergence of two new fields within astrophysics: gamma-ray polarimetry and gravitational wave (GW) astronomy. The former, which aims to measure the polarization of gamma rays in the energy range of tens to hundreds of keV from astrophysical sources, saw the launch of the first dedicated polarimeters such as GAP and POLAR. Due to both their large scientific interest and their large signal-to-noise ratios, gamma-ray bursts (GRBs) are the primary source of interest for the first generation of polarimeters. Polarization measurements are theorized to provide a unique probe of the mechanisms at play in these extreme phenomena. On the other hand, GW astronomy started with the detection of the first black hole mergers by LIGO in 2015, followed by the first multi-messenger detection in 2017. Aims. While the potential of the two individual fields has been discussed in detail in the literature, the potential for joint observations has thus far been ignored. In this article, we aim to define how GW observations can best contribute to gamma-ray polarimetry and study the scientific potential of joint analyses. In addition, we aim to provide predictions on the feasibility of such joint measurements in the near future. Methods. We study which GW observables can be combined with measurements from gamma-ray polarimetry to improve the discriminating power regarding GRB emission models. We then provide forecasts for the joint detection capabilities of current and future GW detectors and polarimeters. Results. Our results show that by adding GW data to polarimetry, a single precise joint detection would allow for the majority of emission models to be ruled out. We show that in the coming years, joint detections between GW and gamma-ray polarimeters might already be possible. Although these would allow one to constrain part of the model space, the probability of highly constraining joint detections will remain small in the near future.
However, the scientific merit held by even a single such measurement makes it important to pursue such an endeavour. Furthermore, we show that using the next generation of GW detectors, such as the Einstein Telescope, joint detections for which GW data can better complement the polarization data become possible.

  • Research Article
  • Citations: 168
  • 10.1109/access.2019.2947542
A Review of Fog Computing and Machine Learning: Concepts, Applications, Challenges, and Open Issues
  • Jan 1, 2019
  • IEEE Access
  • Karrar Hameed Abdulkareem + 7 more

Systems based on fog computing produce massive amounts of data; accordingly, an increasing number of fog computing apps and services are emerging. In addition, machine learning (ML), an essential area, has made considerable progress in various research domains, including robotics, neuromorphic computing, computer graphics, natural language processing (NLP), decision-making, and speech recognition. Several studies have examined how to employ ML to solve fog computing problems. In recent years, an increasing trend has been observed in adopting ML to enhance fog computing applications and provide fog services, such as efficient resource management, security, latency and energy-consumption mitigation, and traffic modeling. To the best of our knowledge, however, no study has yet investigated the role of ML in the fog computing paradigm. Accordingly, the current research presents an overview of ML functions in the fog computing area, where ML enables end-user and higher-layer services to gain deeper analytics and smarter responses for the tasks at hand. We present a comprehensive review to underline the latest improvements in ML techniques associated with three aspects of fog computing: resource management, accuracy, and security. The role of ML in edge computing is also highlighted. Moreover, other perspectives related to the ML domain, such as types of application support, technique, and dataset, are provided. Lastly, research challenges and open issues are discussed.

  • Research Article
  • Citations: 40
  • 10.1103/physrevd.101.104028
Gravitomagnetic tidal resonance in neutron-star binary inspirals
  • May 15, 2020
  • Physical Review D
  • Eric Poisson

A compact binary system implicating at least one rotating neutron star undergoes gravitomagnetic tidal resonances as it inspirals toward its final merger. These have a dynamical impact on the phasing of the emitted gravitational waves. The resonances are produced by the inertial modes of vibration of the rotating star. Four distinct modes are involved, and the resonances occur within the frequency band of interferometric gravitational-wave detectors when the star spins at a frequency that lies within this band. The resonances are driven by the gravitomagnetic tidal field created by the companion star; this is described by a post-Newtonian vector potential, which is produced by the mass currents associated with the orbital motion. These resonances were identified previously by Flanagan and Racine [Phys. Rev. D 75, 044001 (2007)], but these authors accounted only for the response of a single mode, the r-mode, a special case of inertial modes. All four relevant modes are included in the analysis presented in this paper. The total accumulated gravitational-wave phase shift is shown to range from approximately $10^{-2}$ radians when the spin and orbital angular momenta are aligned, to approximately $10^{-1}$ radians when they are anti-aligned. Such phase shifts will become measurable in the coming decades with the deployment of the next generation of gravitational-wave detectors (Cosmic Explorer, Einstein Telescope); they might even come to light within this decade, thanks to planned improvements in the current detectors. With good constraints on the binary masses and spins gathered from the inspiral waveform, the phase shifts deliver information regarding the internal structure of the rotating neutron star, and therefore on the equation of state of nuclear matter.

  • Research Article
  • Citations: 7
  • 10.1021/acs.est.4c11888
Machine Learning Advancements and Strategies in Microplastic and Nanoplastic Detection.
  • Apr 28, 2025
  • Environmental science & technology
  • Lifang Xie + 4 more

Microplastics (MPs) and nanoplastics (NPs) present formidable global environmental challenges, with serious risks to human health and ecosystem sustainability. Despite their significance, the accurate assessment of environmental MP and NP pollution remains hindered by limitations in existing detection technologies, such as low resolution, substantial data volumes, and prolonged imaging times. Machine learning (ML) provides a promising pathway to overcome these challenges by enabling efficient data processing and complex pattern recognition. This systematic Review aims to address these gaps by examining the role of ML techniques combined with spectroscopy in improving the detection and characterization of NPs. We focus on the application of ML and key tools in MP and NP detection, categorizing the literature into two key aspects: (1) developing tailored strategies for constructing ML models to optimize plastic detection while expanding monitoring capabilities, with emphasis on harnessing the unique molecular fingerprinting capabilities offered by spectroscopy, including both infrared (IR) and Raman spectra; and (2) providing an in-depth analysis of the challenges and issues encountered by current ML approaches for NP detection. This Review highlights the critical role of ML in advancing environmental monitoring and enabling deeper investigation of the widespread presence of NPs. By identifying current key challenges, it provides valuable insights for future directions in environmental management and public health protection.

  • Book Chapter
  • 10.2174/9798898811624125010013
Utilization of Machine Learning in Disease Anticipation and Prevention
  • Nov 23, 2025
  • Ashish Verma + 4 more

Predictive and preventive strategies for disease have been transformed through machine learning (ML), which has created opportunities for earlier diagnosis and personalized care that were not previously available in healthcare. This chapter summarizes the role of ML in healthcare, emphasizing its importance in predicting diseases and preventing their onset. The key algorithms, including decision trees, neural networks, and support vector machines, and the fundamentals of ML (supervised, unsupervised, and reinforcement learning) are covered, as are different data sources for ML applications, including genomic data, wearables, and public health data. Data preprocessing and feature-engineering steps, such as cleaning, selection, and transformation, are also covered. The chapter delves into model training, evaluation metrics, and challenges such as handling imbalanced data, overfitting, and underfitting. It highlights personalized disease-prediction models and risk-factor assessments, showing how individual health data can lead to more tailored predictions. The role of ML in preventive healthcare is also explored, with a focus on early-intervention approaches and lifestyle-change recommendations. It further describes significant implementations of ML for disease prediction, including early detection of kidney diseases, infectious outbreaks, and mental health disorders. Finally, the chapter discusses the challenges and limitations of implementing ML in healthcare.

  • Research Article
  • 10.1088/2516-1091/ae0bd3
Machine and deep learning applied to medical microwave imaging: a scoping review from reconstruction to classification
  • Oct 1, 2025
  • Progress in Biomedical Engineering
  • Tiago M M Silva + 2 more

Microwave imaging (MWI) is a promising modality due to its non-invasive nature and lower cost compared to other medical imaging techniques. These characteristics make it a potential alternative to traditional imaging techniques. It has various medical applications, particularly explored in breast and brain imaging. Machine learning (ML) has also been increasingly used for medical applications. This paper provides a scoping review of the role of ML in MWI, focusing on two key areas: image reconstruction and classification. The reconstruction section discusses various ML algorithms used to enhance image quality, highlighting methods such as convolutional neural network and support vector machine. The classification section delves into the application of ML for distinguishing between different tissue types, including applications in breast cancer detection and neurological disorder classification. By analyzing the latest studies and methodologies, this review addresses the current state of ML-enhanced MWI and sheds light on its potential for clinical applications.

  • PDF Download Icon
  • Research Article
  • Cite Count Icon 17
  • 10.3390/asi4030040
Applications of Machine Learning and High-Performance Computing in the Era of COVID-19
  • Jun 30, 2021
  • Applied System Innovation
  • Abdul Majeed + 1 more

During the ongoing pandemic of the novel coronavirus disease 2019 (COVID-19), latest technologies such as artificial intelligence (AI), blockchain, learning paradigms (machine, deep, smart, few short, extreme learning, etc.), high-performance computing (HPC), Internet of Medical Things (IoMT), and Industry 4.0 have played a vital role. These technologies helped to contain the disease’s spread by predicting contaminated people/places, as well as forecasting future trends. In this article, we provide insights into the applications of machine learning (ML) and high-performance computing (HPC) in the era of COVID-19. We discuss the person-specific data that are being collected to lower the COVID-19 spread and highlight the remarkable opportunities it provides for knowledge extraction leveraging low-cost ML and HPC techniques. We demonstrate the role of ML and HPC in the context of the COVID-19 era with the successful implementation or proposition in three contexts: (i) ML and HPC use in the data life cycle, (ii) ML and HPC use in analytics on COVID-19 data, and (iii) the general-purpose applications of both techniques in COVID-19’s arena. In addition, we discuss the privacy and security issues and architecture of the prototype system to demonstrate the proposed research. Finally, we discuss the challenges of the available data and highlight the issues that hinder the applicability of ML and HPC solutions on it.
