Articles published on Artifact rejection
399 Search results
- Research Article
- 10.1016/j.earlhumdev.2025.106387
- Nov 1, 2025
- Early human development
- Jiaqi Li + 3 more
Alterations in EEG functional connectivity in preterm infants: A systematic review.
- Research Article
- 10.1016/j.biopsycho.2025.109130
- Oct 1, 2025
- Biological psychology
- Yaqi Yang + 7 more
Deviant functional connectivity patterns in the EEG related to developmental dyslexia and their potential use for screening.
- Research Article
- 10.1088/1741-2552/ae10e0
- Oct 1, 2025
- Journal of Neural Engineering
- Yuanyi Ding + 10 more
Objective. Accurate detection and classification of high-frequency oscillations (HFOs) in electroencephalography (EEG) recordings have become increasingly important for identifying epileptogenic zones in patients with drug-resistant epilepsy. However, few open-source platforms offer both state-of-the-art computational methods and user-friendly interfaces to support practical clinical use. Approach. We present PyHFO 2.0, an enhanced open-source, Python-based platform that extends previous work by incorporating a more comprehensive set of detection methods and deep learning (DL) tools for HFO analysis. The platform now supports three commonly used detectors: short-term energy, Montreal Neurological Institute, and a newly integrated Hilbert transform-based detector. For HFO classification, PyHFO 2.0 includes DL models for artifact rejection, spike HFO detection, and identification of epileptogenic HFOs. These models are integrated with the Hugging Face ecosystem for automatic loading and can be replaced with custom-trained alternatives. An interactive annotation module enables clinicians and researchers to inspect, verify, and reclassify events. Main results. All detection and classification modules were evaluated using clinical EEG datasets, supporting the applicability of the platform in both research and translational settings. Validation across multiple datasets demonstrated close alignment with expert-labeled annotations and standard tools such as RIPPLELAB. Significance. PyHFO 2.0 aims to simplify the use of computational neuroscience tools in both research and clinical environments by combining methodological rigor with a user-friendly graphical interface. Its scalable architecture and model integration capabilities support a range of applications in biomarker discovery, epilepsy diagnostics, and clinical decision support, bridging advanced computation and practical usability.
- Research Article
- 10.1177/13872877251379072
- Sep 26, 2025
- Journal of Alzheimer's disease : JAD
- Huiqing Hou + 4 more
Background. Electroencephalography (EEG) microstate analysis has emerged as a key methodology for elucidating the brain's dynamic repertoire, providing a pivotal neurophysiological framework for identifying cognitive impairment. Objective. This study aimed to analyze EEG microstates in Alzheimer's disease (AD) using a publicly accessible EEG dataset, and additionally to apply support vector machine models to separate healthy controls from AD patients. Methods. The open-source scalp EEG dataset included 36 AD patients and 29 healthy controls. All EEG data underwent standardized preprocessing incorporating a 0.5-35 Hz band-pass filter and automated artifact rejection. The EEG data were then partitioned into 20-s segments for microstate analysis, generating temporally aligned sequences characterized by the canonical four-class spatial configurations. Results. A total of 24 features were extracted from the microstate sequences, including coverage, mean duration, occurrence, and transition probabilities between each pair of microstates. Statistical testing indicated significant differences in 21 features between AD patients and healthy controls. Based on the statistically significant features, we implemented support vector machine models to distinguish AD patients from healthy controls, achieving an average classification accuracy of 75.8% in 5-fold cross-subject validation over 10 repeated random trials. Conclusions. EEG microstate analysis is a non-invasive, convenient, and efficient technique that could be adopted for identifying AD.
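The classification step in the abstract above (an SVM evaluated with 5-fold cross-validation on 24 microstate features from 36 patients and 29 controls) can be sketched with scikit-learn. The feature values below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical microstate features: 24 per subject (coverage, mean
# duration, occurrence, and pairwise transition probabilities for the
# canonical four-class maps), for 36 patients and 29 controls.
X_patients = rng.normal(0.3, 1.0, size=(36, 24))
X_controls = rng.normal(-0.3, 1.0, size=(29, 24))
X = np.vstack([X_patients, X_controls])
y = np.array([1] * 36 + [0] * 29)

# SVM with feature standardization, scored by 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f}")
```

Stratified folds keep the patient/control ratio similar in each fold, which matters with a modest, slightly unbalanced sample like this one.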
- Research Article
- 10.1016/j.neuroimage.2025.121304
- Aug 1, 2025
- NeuroImage
- Guanghui Zhang + 1 more
Assessing the impact of artifact correction and artifact rejection on the performance of SVM- and LDA-based decoding of EEG signals.
- Research Article
- 10.7759/cureus.87496
- Jul 8, 2025
- Cureus
- Abhijeet Satani + 3 more
Background and objective. Extensive use of social media raises concerns regarding its psychological and neurophysiological impact. Although behavioral effects have been the focus of earlier research, there are scarce empirical data addressing the degree to which real-time brain activity changes with social media use. This research aimed to examine the neurocognitive impact of social media usage by assessing brainwave activity via electroencephalography (EEG) to determine specific patterns of neural engagement as well as cognitive and emotional responses. Methods. EEG recordings were obtained from 100 participants with a 24-channel system based on the international 10-20 standard. Participants were healthy adults aged 18-45 years (mean age: 27.4 years), including 52 females and 48 males. Individuals with a history of neurological or psychiatric disorders were excluded. Data were preprocessed with band-pass and notch filtering, artifact rejection by independent component analysis (ICA), common average referencing, epoching, and downsampling. Participants used social media for 30-minute periods, during which neural activity in five frequency bands (Delta, Theta, Alpha, Beta, Gamma) was recorded and analyzed in terms of user interactions and content type. Results. Social media use caused marked alterations in brainwave activity. Alpha waves declined during engagement, especially with emotionally charged content, suggesting cognitive load and excitation. Beta and Gamma waves were heightened during active interaction and remained elevated after engagement, which may indicate extended cognitive excitation and emotional engagement. Theta and Delta waves increased slightly during passive browsing or extended use, which could indicate introspection and mental exhaustion.
Regional examination identified Beta/Gamma predominance in prefrontal and occipital cortices during decision-making and viewing of visual content, and Beta/Theta predominance in parietal cortex during multitasking between platforms. Conclusions. Our findings show that social media engages brain reward pathways akin to those seen in addictive behavior, with extended Beta and Gamma activity having the potential to interfere with emotional regulation and attention. These neurophysiological consequences, especially delayed Alpha recovery and increased Delta activity, raise new concerns regarding digital fatigue and mental health. The findings indicate the importance of platform design interventions and additional longitudinal investigations.
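Band power in the five frequency bands named above is conventionally estimated from the power spectral density. A minimal single-channel sketch using Welch's method follows; the sampling rate, band edges, and synthetic signal are assumptions for illustration, not taken from the paper:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 256  # assumed sampling rate in Hz
sig = rng.normal(size=fs * 30)  # 30 s of synthetic single-channel EEG

# Illustrative band edges; exact definitions vary across studies.
bands = {"Delta": (1, 4), "Theta": (4, 8), "Alpha": (8, 13),
         "Beta": (13, 30), "Gamma": (30, 45)}

# Welch PSD with 2-second segments, then sum power within each band.
freqs, psd = welch(sig, fs=fs, nperseg=2 * fs)
power = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}

# Relative power normalizes each band by the total across bands.
total = sum(power.values())
rel = {name: p / total for name, p in power.items()}
print(rel)
```

Relative (rather than absolute) power is often preferred for between-subject comparisons because it is less sensitive to differences in electrode impedance and overall signal amplitude.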
- Research Article
- 10.3791/68350
- Jun 6, 2025
- Journal of visualized experiments : JoVE
- Lucas Murrins Marques + 9 more
Electroencephalography (EEG) is a crucial tool in neuroscience research and clinical applications, but raw EEG data often contain noise and artifacts that compromise signal quality. To address this, we developed PIPEMAT-RS, a standardized MATLAB-based preprocessing pipeline for resting-state EEG data. PIPEMAT-RS follows a structured seven-step workflow: file format conversion, EEG montage configuration, downsampling, filtering, artifact rejection, independent component analysis (ICA), and ICLabel classification for automated artifact removal. This protocol enhances EEG data quality by minimizing human intervention while maintaining high accuracy in artifact rejection. It was validated using multiple datasets, demonstrating its robustness in improving signal integrity. PIPEMAT-RS provides a systematic approach that facilitates reproducibility and reliability in EEG studies, aligning with commonly adopted practices in the field and offering a clearly documented structure that can complement existing pipelines. By standardizing EEG preprocessing, PIPEMAT-RS facilitates neurophysiological research and clinical applications, allowing for more accurate interpretations of resting-state brain activity and its associations with neurological and psychiatric conditions.
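The middle steps of such a workflow (downsampling, filtering, amplitude-based artifact rejection) can be sketched in Python. This is not PIPEMAT-RS itself, which is MATLAB-based; the sampling rate, epoch length, and 150 µV rejection threshold are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(0)
fs = 500  # assumed original sampling rate in Hz

# Synthetic single-channel "EEG": noise plus one large simulated artifact.
data = rng.normal(0, 10e-6, size=fs * 60)
data[fs * 30 : fs * 30 + fs] += 300e-6  # 1-s, 300 uV movement artifact

# Downsample 500 Hz -> 250 Hz (naive decimation for the sketch),
# then band-pass filter 1-40 Hz with a zero-phase Butterworth filter.
data = data[::2]
fs = 250
sos = butter(4, [1, 40], btype="bandpass", fs=fs, output="sos")
data = sosfiltfilt(sos, data)

# Segment into 2-second epochs and reject any epoch whose
# peak-to-peak amplitude exceeds an assumed 150 uV threshold.
epoch_len = 2 * fs
epochs = data[: len(data) // epoch_len * epoch_len].reshape(-1, epoch_len)
ptp = epochs.max(axis=1) - epochs.min(axis=1)
clean = epochs[ptp < 150e-6]
print(f"kept {len(clean)} of {len(epochs)} epochs")
```

In a real pipeline, proper decimation (anti-alias filtering before downsampling) and ICA-based component rejection would follow, as PIPEMAT-RS's remaining steps describe.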
- Research Article
- 10.1093/sleep/zsaf090.1050
- May 19, 2025
- SLEEP
- Caitlin Carroll + 10 more
Abstract Introduction Adolescence is marked by profound changes in sleep-wake physiology, driven by recognized neurodevelopmental shifts impacting macro- and micro-architecture of the electroencephalogram (EEG) during sleep. Ensuring reliable and high-quality data is critical for studying these developmental processes, particularly in multicenter settings. In this ancillary study to the Molecular Transducers of Physical Activity Consortium (MoTrPAC), we aimed to assess the reliability and data quality of sleep EEG by evaluating consistency and concordance in data collected across two sites using identical platforms. Methods 113 participants (mean age 14.5 ± 2.5 years, 55% female) completed the Pediatric MoTrPAC study and underwent standard overnight polysomnography (PSG) with high-density EEG (hdEEG; 128 channels) at two sleep laboratory sites: UCI Sleep Laboratory (n=66) or the Research Center for Exercise Medicine and Sleep (RCEMS/PERC; n=47). Both sites used the same equipment and protocols (Natus Neurology equipment, Waveguard original EEG cap, and Natus SleepWorks software). All recordings were scored by a board-certified sleep medicine physician using 30-second epochs. EEG data underwent preprocessing, artifact rejection, and segmentation into concatenated NREM epochs. Spectral power was calculated for frequency bands using a multitaper approach. Threshold-free cluster enhancement (TFCE) was used for multiple-comparisons correction across topography. Results Total sleep time and time spent in each sleep stage did not differ between sites. Similarly, we observed no site-based differences in absolute or relative spectral power across all frequency bands. Consistent with prior work, age-related decreases in absolute power were observed at both sites for slow wave activity, slow oscillations, delta, theta, sigma, and beta frequency bands (all p < 0.05, TFCE corrected).
Conversely, relative power in higher frequency bands, including sigma, beta, and gamma, increased with age (all p < 0.05, TFCE corrected). These patterns were observed at both sites with no significant differences between sites. Conclusion This study demonstrates the reliability of PSG with hdEEG data collected at two sites using identical platforms. The findings align with previous studies showing age-related changes in spectral power, reflecting neurodevelopmental processes. These results validate the use of PSG with hdEEG for multicenter studies and support combining data across sites for future analyses advancing our understanding of sleep neurophysiology in adolescence. Support (if any): R01HL153807, T32AG033534, U01AR071158, PERC Systems Biology Program
- Research Article
- 10.14311/ap.2025.65.0050
- Mar 6, 2025
- Acta Polytechnica
- Sergey Karpov
We present STDWeb, a simple web-based tool for quick-look photometry and transient detection in astronomical images. It implements a self-consistent and mostly automatic data-analysis workflow that works on any uploaded image, allowing the user to perform basic interactive masking, object detection, and astrometric calibration, and to build a photometric solution based on a selection of catalogues and supported filters, optionally including a colour term and a positionally varying zero point. It also supports image subtraction using either user-provided or automatically downloaded template images, forced photometry for a specified target in either the original or the difference image, and transient detection with basic rejection of artefacts. The tool may be easily deployed, allowing its integration into the infrastructure of robotic telescopes or data archives for effortless analysis of their images.
- Research Article
- 10.1080/14992027.2025.2465767
- Mar 5, 2025
- International Journal of Audiology
- Aoi A Hunsaker + 4 more
Objective To reduce the amplitude of stimulus artefacts present in bone conduction auditory brainstem response (BC ABR) measurements. Design Electromagnetic shielding was applied to the surface of a clinical BC transducer. A foam pad was also placed on the shielded mastoid-contacting plate of the transducer. Acoustic impacts of these modifications were evaluated using an artificial mastoid. Unmodified and modified (shielding with pad) transducers were then used to elicit BC ABRs in adults and infants. Stimulus artefact amplitudes were compared across transducers. Study sample Six adults (24–42 years old) and 13 typically developing infants (mean age 48.77 days) with no sensorineural hearing loss. Results Shielding alone slightly decreased acoustic transducer output above approximately 1000 Hz. The addition of a foam pad largely negated this loss, while lower-frequency (500–1000 Hz) acoustic transducer output was slightly increased. The modified transducer produced significantly less stimulus artefact, although variation across subjects was also evident. In a clinical setting, Wave V was detected at similar rates for both transducers. Conclusion While artefact was not eliminated, direct attenuation of artefact amplitude (versus software-based mitigation strategies) could simplify BC ABR and other evoked potential measurement protocols and support more stringent artefact rejection criteria to yield more informative recordings.
- Research Article
- 10.1016/j.jneumeth.2024.110350
- Mar 1, 2025
- Journal of neuroscience methods
- Ludovic Gardy + 5 more
Detecting fast-ripples on both micro- and macro-electrodes in epilepsy: A wavelet-based CNN detector.
- Research Article
- 10.1051/0004-6361/202452011
- Mar 1, 2025
- Astronomy & Astrophysics
- S Karpov + 2 more
Context. Thirty years after the discovery of brown dwarfs, the search for these objects continues, particularly in the vicinity of the Sun. Objects near the Sun are characterized by large proper motions, so they appear as fast-moving objects. While the Gaia DR3 catalog is a comprehensive source of proper motions, it lacks the depth needed to discover fainter objects. Modern multi-epoch surveys, with their greater depth, offer a new opportunity to systematically search for ultracool dwarfs. Aims. The study aims to systematically search for high-proper-motion objects using the newly released catalog of epochal Wide-field Infrared Survey Explorer (WISE) data in order to identify new brown dwarf candidates in the solar neighborhood and to estimate their spectral types, distances, and spatial velocities. Methods. We used the recently released unTimely catalog of epochal detections in unWISE coadds to search for objects with high proper motions, combining a simple motion-detection algorithm with a machine-learning-based artifact rejection routine. This method identified objects with proper motions exceeding approximately 0.3 arcseconds per year. The identified objects were then cross-referenced with data from other large-scale sky surveys to further analyze their characteristics. Results. The search yielded 21 885 moving objects with significant proper motions, 258 of which had not been previously published. All except 6 of them are compatible with being ultracool dwarfs. Among these, at least 33 were identified as promising new T dwarf candidates, with estimated distances closer than about 40 parsecs and effective temperatures below 1300 K.
- Research Article
- 10.1101/2025.02.22.639684
- Feb 25, 2025
- bioRxiv
- Guanghui Zhang + 1 more
Numerous studies have demonstrated that eyeblinks and other large artifacts can decrease the signal-to-noise ratio of EEG data, resulting in decreased statistical power for conventional univariate analyses. However, it is not clear whether eliminating these artifacts during preprocessing enhances the performance of multivariate pattern analysis (MVPA; decoding), especially given that artifact rejection reduces the number of trials available for training the decoder. This study aimed to evaluate the impact of artifact-minimization approaches on the decoding performance of support vector machines. Independent component analysis (ICA) was used to correct ocular artifacts, and artifact rejection was used to discard trials with large voltage deflections from other sources (e.g., muscle artifacts). We assessed decoding performance in relatively simple binary classification tasks using data from seven commonly used event-related potential paradigms (N170, mismatch negativity, N2pc, P3b, N400, lateralized readiness potential, and error-related negativity), as well as in more challenging multi-way decoding tasks, including stimulus location and stimulus orientation. The results indicated that the combination of artifact correction and rejection did not improve decoding performance in the vast majority of cases. However, artifact correction may still be essential to minimize artifact-related confounds that might artificially inflate decoding accuracy. Researchers who are decoding EEG data from paradigms, populations, and recording setups similar to those examined here may benefit from our recommendations to optimize decoding performance and avoid incorrect conclusions.
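The ICA-based ocular correction described above can be illustrated on synthetic data: mix a spiky "blink" source into two channels, unmix with FastICA, zero the high-kurtosis component, and reconstruct. This is a toy sketch, not the study's pipeline; real EEG work typically uses dedicated tools such as MNE-Python or EEGLAB:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 2000

# Two hypothetical sources: ongoing "neural" activity and a sparse blink train.
neural = np.sin(np.linspace(0, 40 * np.pi, n_samples)) + 0.05 * rng.normal(size=n_samples)
blink = np.zeros(n_samples)
blink[500:520] = 5.0
blink[1400:1420] = 5.0
S = np.c_[neural, blink]

# Mix into two "channels"; the frontal channel picks up more of the blink.
A = np.array([[1.0, 0.8], [1.0, 0.2]])
X = S @ A.T

# Unmix, flag the blink component by its kurtosis (blinks are sparse and
# spiky, so their component has much heavier tails), zero it, reconstruct.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)
kurt = ((sources - sources.mean(0)) ** 4).mean(0) / sources.var(0) ** 2
sources[:, np.argmax(kurt)] = 0.0
X_clean = ica.inverse_transform(sources)
print("blink amplitude before/after:",
      np.abs(X[500:520, 0]).max(), np.abs(X_clean[500:520, 0]).max())
```

Correction of this kind preserves all trials, whereas rejection discards them; the study above asks whether either step actually helps downstream decoding.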
- Research Article
- 10.3390/brainsci14121272
- Dec 18, 2024
- Brain sciences
- András Adolf + 4 more
Background/Objectives: Accurately classifying Electroencephalography (EEG) signals is essential for the effective operation of Brain-Computer Interfaces (BCI), which is needed for reliable neurorehabilitation applications. However, many factors in the processing pipeline can influence classification performance. The objective of this study is to assess the effects of different processing steps on classification accuracy in EEG-based BCI systems. Methods: This study explores the impact of various processing techniques and stages, including the FASTER algorithm for artifact rejection (AR), frequency filtering, transfer learning, and cropped training. The Physionet dataset, consisting of four motor imagery classes, was used as input due to its relatively large number of subjects. The raw EEG was tested with EEGNet and Shallow ConvNet. To examine the impact of adding a spatial dimension to the input data, we also used the Multi-branch Conv3D Net and developed two new models, Conv2D Net and Conv3D Net. Results: Our analysis showed that classification accuracy can be affected by many factors at every stage. Applying the AR method, for instance, can either enhance or degrade classification performance, depending on the subject and the specific network architecture. Transfer learning was effective in improving the performance of all networks for both raw and artifact-rejected data. However, the improvement in classification accuracy for artifact-rejected data was less pronounced compared to unfiltered data, resulting in reduced precision. For instance, the best classifier achieved 46.1% accuracy on unfiltered data, which increased to 63.5% with transfer learning. In the filtered case, accuracy rose from 45.5% to only 55.9% when transfer learning was applied. An unexpected outcome regarding frequency filtering was observed: networks demonstrated better classification performance when focusing on lower-frequency components. 
Higher frequency ranges were more discriminative for EEGNet and Shallow ConvNet, but only when cropped training was applied. Conclusions: The findings of this study highlight the complex interaction between processing techniques and neural network performance, emphasizing the necessity for customized processing approaches tailored to specific subjects and network architectures.
- Research Article
- 10.1088/1741-2552/ad788e
- Dec 1, 2024
- Journal of Neural Engineering
- Taeho Kang + 2 more
Objective. In this paper, we conduct a detailed investigation of the effect of independent component (IC)-based noise rejection methods on neural network classifier-based decoding of electroencephalography (EEG) data in different task datasets. Approach. We apply a pipeline matrix of two popular IC decomposition methods (Infomax and Adaptive Mixture Independent Component Analysis (AMICA)) with three component rejection strategies (none, ICLabel, and the multiple artifact rejection algorithm (MARA)) on three EEG datasets (motor imagery, long-term memory formation, and visual memory). We cross-validate processed data from each pipeline with three architectures commonly used for EEG classification (two convolutional neural networks and one long short-term memory-based model). We compare decoding performances at the within-participant and within-dataset levels. Main results. Our results show that the benefit of using IC-based noise rejection for decoding analyses is at best minor, as component-rejected data did not show consistently better performance than data without rejections, especially given the significant computational resources required for independent component analysis (ICA) computations. Significance. With ever-growing emphasis on transparency and reproducibility, as well as the obvious benefits arising from streamlined processing of large-scale datasets, there has been increased interest in automated methods for pre-processing EEG data. One prominent part of such pre-processing pipelines consists of identifying and potentially removing artifacts arising from extraneous sources. This is typically done via IC-based correction, for which numerous methods have been proposed, differing not only in how they decompose the raw data into ICs, but also in how they reject the computed ICs.
While the benefits of these methods are well established in univariate statistical analyses, it is unclear whether they help in multivariate scenarios, and specifically in neural network-based decoding studies. As computational costs for pre-processing large-scale datasets are considerable, it is important to consider whether the trade-off between model performance and available resources is worth the effort.
- Research Article
- 10.1088/2057-1976/ad7e2d
- Oct 4, 2024
- Biomedical Physics & Engineering Express
- Amna Ghani + 3 more
Automation is revamping our preprocessing pipelines and accelerating the delivery of personalized digital medicine. It improves efficiency, reduces costs, and allows clinicians to treat patients without significant delays. However, the influx of multimodal data highlights the need to protect sensitive information, such as clinical data, and to safeguard data fidelity. One neuroimaging modality that produces large amounts of time-series data is electroencephalography (EEG). It captures neural dynamics in a task or resting brain state with high temporal resolution. EEG electrodes placed on the scalp acquire electrical activity from the brain. These electrical potentials attenuate as they cross multiple layers of brain tissue and fluid, yielding signals that are weak relative to noise, i.e., a low signal-to-noise ratio. EEG signals are further distorted by internal physiological artifacts, such as eye movements (EOG) or heartbeat (ECG), and by external noise, such as line noise (50 Hz). EOG artifacts, due to their proximity to the frontal brain regions, are particularly challenging to eliminate. Therefore, a widely used EOG rejection method, independent component analysis (ICA), demands manual inspection of the marked EOG components before they are rejected from the EEG data. We underscore the inaccuracy of automated ICA rejection and provide an auxiliary algorithm, Second Layer Inspection for EOG (SLOG), for the clinical environment. SLOG, based on spatial and temporal patterns of eye movements, re-examines the already-marked EOG artifacts and confirms that no EEG-related activity is mistakenly eliminated in this artifact rejection step. SLOG achieved a 99% precision rate on the simulated dataset and 85% precision on the real EEG dataset. One of the primary considerations for cloud-based applications is operational cost, including computing power. Algorithms like SLOG allow us to maintain data fidelity and precision without overloading cloud platforms and maxing out our budgets.
- Research Article
- 10.1109/jbhi.2024.3415479
- Oct 1, 2024
- IEEE journal of biomedical and health informatics
- Andreas Tzavelis + 17 more
Cough is an important symptom in children with acute and chronic respiratory disease. Daily cough is common in Cystic Fibrosis (CF), and increased cough is a symptom of pulmonary exacerbation. To date, cough assessment is primarily subjective in clinical practice and research. Attempts to develop objective, automatic cough counting tools have faced reliability issues in noisy environments and practical barriers limiting long-term use. This single-center pilot study evaluated the usability, acceptability, and performance of a mechanoacoustic sensor (MAS), previously used for cough classification in adults, in 36 children with CF over brief and multi-day periods in four cohorts. Children whose health was at baseline and children who had symptoms of pulmonary exacerbation were included. We trained, validated, and deployed custom deep learning algorithms for accurate cough detection and classification from other vocalizations or artifacts, with an overall area under the receiver-operating characteristic curve (AUROC) of 0.96 and average precision (AP) of 0.93. Child and parent feedback led to a redesign of the MAS towards a smaller, more discreet device acceptable for daily use in children. Additional improvements optimized power efficiency and data management. The MAS's ability to objectively measure cough and other physiologic signals across clinic, hospital, and home settings is demonstrated, particularly aided by an AUROC of 0.97 and AP of 0.96 for motion artifact rejection. Examples of cough frequency and physiologic parameter correlations with participant-reported outcomes and clinical measurements for individual patients are presented. The MAS is a promising tool for objective longitudinal evaluation of cough in children with CF.
- Research Article
- 10.1016/j.ijpsycho.2024.112441
- Sep 17, 2024
- International Journal of Psychophysiology
- Brittany A Larsen + 1 more
EEG might be better left alone, but ERPs must be attended to: Optimizing the late positive potential preprocessing pipeline
- Research Article
- 10.1016/j.ebiom.2024.105259
- Aug 1, 2024
- eBioMedicine
- Philipp Bomatter + 4 more
Machine learning of brain-specific biomarkers from EEG
- Research Article
- 10.1088/1741-2552/ad5c04
- Jul 16, 2024
- Journal of Neural Engineering
- Simon Marchant + 7 more
Objective. Automated detection of artefact in stimulus-evoked electroencephalographic (EEG) data recorded in neonates will improve the reproducibility and speed of analysis in clinical research compared with manual identification of artefact. Some studies use very short, single-channel epochs of EEG data with little recorded EEG per infant, for example because the clinical vulnerability of the infants limits access for recording. Current artefact-detection methods that perform well on adult data and on resting-state and multi-channel data in infants are not suitable for this application. The aim of this study was to create and test an automated method of detecting artefact in single-channel 1500 ms epochs of infant EEG. Approach. A total of 410 epochs of EEG were used, collected from 160 infants of 28–43 weeks postmenstrual age. This dataset, which was balanced to include epochs of background activity and responses to visual, auditory, tactile and noxious stimuli, was presented to seven independent raters, who labelled the epochs according to whether or not they could visually identify artefacts. The data were split into a training set (340 epochs) and an independent test set (70 epochs). A random forest model was trained to classify epochs as either artefact or not artefact. Main results. The model performs well, achieving a balanced accuracy of 0.81, which is as good as manual review of the data. Accuracy was not significantly related to infant age or stimulus type. Significance. This method provides an objective tool for automated artefact rejection for short-epoch, single-channel EEG in neonates and could increase the utility of EEG in neonates in both clinical and research settings.
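A random forest classifier of the kind described above can be sketched with scikit-learn. The per-epoch features and labels below are synthetic placeholders (the paper does not specify its feature set here); only the 340/70 split mirrors the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-epoch features for single-channel 1500 ms epochs,
# e.g. peak-to-peak amplitude, line length, and spectral measures.
n_epochs = 410
X = rng.normal(size=(n_epochs, 6))
# Synthetic artefact labels driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n_epochs) > 0).astype(int)

# Train/test split mirroring the paper's 340/70 division (here random).
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=70, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
print(f"balanced accuracy: {bal_acc:.2f}")
```

Balanced accuracy (the paper's reported metric) averages per-class recall, so it is not inflated when artefact and non-artefact epochs are unevenly represented.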