Articles published on Noise Levels
Authors
Select Authors
Journals
Select Journals
Duration
Select Duration
47,929 search results
Sort by Recency
- New
- Research Article
- 10.1073/pnas.2519032123
- Feb 13, 2026
- Proceedings of the National Academy of Sciences
- Lucy Liu + 3 more
In crowded environments, individuals must navigate around other occupants to reach their destinations. Understanding and controlling traffic flows in these spaces is relevant for coordinating robot swarms and designing infrastructure for dense populations. Here, we use simulations, theory, and experiments to study how adding stochasticity to agent motion can reduce traffic jams and help agents travel more quickly to prescribed goals. A computational approach reveals the collective behavior. Above a critical noise level, large jams do not persist. From this observation, we analytically approximate the swarm's goal attainment rate, which allows us to solve for the agent density and noise level that maximize the goals reached. Robotic experiments corroborate the behaviors observed in our simulated and theoretical results. Finally, we compare simple, local navigation approaches with a sophisticated but computationally costly central planner. A simple reactive scheme performs well up to moderate densities and is far more computationally efficient than a planner, motivating further research into robust, decentralized navigation methods for crowded environments. By integrating ideas from physics and engineering using simulations, theory, and experiments, our work identifies new directions for emergent traffic research.
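A minimal sketch, not the authors' code, of the central idea: goal-seeking agents whose headings carry a tunable noise level, with a crude exclusion rule standing in for collisions (all parameter values are illustrative assumptions):

```python
# Minimal sketch, not the authors' model: goal-seeking agents on a periodic
# square whose headings are perturbed by a tunable noise level. Density,
# speed, radius, and the exclusion rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, box, speed, radius, noise = 60, 10.0, 0.1, 0.25, 0.4  # noise: heading jitter (rad)
pos = rng.uniform(0, box, (n, 2))
goal = rng.uniform(0, box, (n, 2))

for _ in range(1000):
    delta = goal - pos
    heading = np.arctan2(delta[:, 1], delta[:, 0])       # angle toward each goal
    heading += noise * rng.standard_normal(n)            # stochastic perturbation
    trial = (pos + speed * np.stack([np.cos(heading), np.sin(heading)], 1)) % box
    # crude hard-disk exclusion: move only if no other agent sits within 2*radius
    d = np.linalg.norm(trial[:, None] - pos[None], axis=2) + 1e9 * np.eye(n)
    pos = np.where((d.min(axis=1) > 2 * radius)[:, None], trial, pos)

print(f"goals reached: {(np.linalg.norm(pos - goal, axis=1) < radius).sum()}/{n}")
```

Sweeping `noise` in such a sketch is the kind of experiment that exposes a critical noise level above which large jams dissolve.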
- New
- Research Article
- 10.2514/1.j066232
- Feb 12, 2026
- AIAA Journal
- Zhe Yang + 3 more
Distributed propulsion is a promising concept for future urban air mobility aircraft, enabling lower emissions, higher efficiency, and improved maneuverability. However, aerodynamic interactions between propellers and wings may induce additional noise. Large-eddy simulations for a tractor-configured distributed propulsion system are conducted to gain further insights into the tip-vortex-impingement noise on the wing leading edge. The Ffowcs-Williams and Hawkings method is employed to determine the noise emission to the far field. Comparison of the simulation results with experimental and numerical reference data demonstrates good accuracy of the numerical methods. Results reveal that propeller tip vortices impinging on the wing leading edge generate pressure pulses at the blade-passing frequency. Acoustic footprints are extracted from the hydrodynamic pressure perturbations of the wing near field using a surface-based noise source localization method, which identifies tip-vortex-impingement noise and trailing-edge noise as dominant sources on the installed wing. Although propeller noise remains the primary contributor to overall emissions, propeller–wing interaction leads to a 1–3 dB increase in the noise levels. Both tip-vortex-impingement noise and the trailing-edge noise from the wing exhibit dipole-like directivity, radiating primarily normal to the freestream flow. This study highlights mechanisms and key areas for potential noise mitigation strategies in distributed propulsion systems.
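For reference, the Ffowcs Williams and Hawkings (FW-H) analogy used above can be written in its schematic permeable-surface form (the authors' exact formulation is not stated in the abstract):

$$\Big(\frac{1}{c_0^2}\frac{\partial^2}{\partial t^2}-\nabla^2\Big)\big[p'\,H(f)\big] = \frac{\partial}{\partial t}\big[\rho_0 U_n\,\delta(f)\big] - \frac{\partial}{\partial x_i}\big[L_i\,\delta(f)\big] + \frac{\partial^2}{\partial x_i \partial x_j}\big[T_{ij}\,H(f)\big],$$

where $f = 0$ defines the data surface, $H$ and $\delta$ are the Heaviside and Dirac functions, $U_n$ and $L_i$ collect the surface mass-flux and loading terms, and $T_{ij}$ is the Lighthill stress tensor; the three right-hand-side terms correspond to thickness (monopole), loading (dipole), and quadrupole sources.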
- New
- Research Article
- 10.25205/1818-7900-2025-23-4-23-43
- Feb 12, 2026
- Vestnik NSU. Series: Information Technologies
- A V Gavrilov + 2 more
The automation of radiology services has significantly improved access to radiological imaging for accurate diagnosis of diseases and injuries. However, the expansion of radiological equipment, the adoption of telemedicine, and the integration of AI-powered clinical decision support systems necessitate upgrades to existing medical image storage and processing solutions. This article reviews modern compression methods for radiological images, which offer higher compression ratios, improved image quality, and faster encoding/decoding times compared to the standards defined by the DICOM specification. It is established that radiological images possess unique characteristics—such as high noise levels, locally symmetric regions (similar patches), and the presence of multiple sequential frames in a single study—which, when accounted for in compression algorithms, can enhance compression efficiency. Implementing advanced data compression approaches can increase the fault tolerance of high-load medical systems and reduce costs associated with the storage, transmission, and processing of diagnostic studies.
- New
- Research Article
- 10.5194/isprs-archives-xlviii-2-w12-2026-199-2026
- Feb 12, 2026
- The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Richard Honti + 2 more
Abstract. Affordable 360° cameras combined with cloud- or desktop-based photogrammetry have made image-based documentation of complex interiors widely accessible. However, the metric reliability of point clouds reconstructed from 360° video remains inconsistently reported, particularly when compared to survey-grade terrestrial laser scanning (TLS). This study evaluates point-cloud generation from 360° videos captured along guided walking paths using consumer cameras, with high-overlap frame extraction and Structure-from-Motion processing using the CupixVista solution. Comparisons are performed using cloud-to-cloud (C2C) distance analysis, control-point analysis (CPA), and visual inspection of noise, surface roughness, and edge definition, with efficiency also considered. Point clouds are registered to the TLS reference using a two-step rigid alignment (manual coarse registration followed by Iterative Closest Point (ICP) refinement). C2C distances, point density, and CPA errors are computed in CloudCompare after spatial subsampling. Control points are defined as intersections of locally fitted planes to improve precision. Several indoor test scenes with varying geometry, scale, and visual characteristics are analyzed. Results show average deviations around 50 mm for C2C and 35 mm for CPA, with visual inspection confirming higher noise levels and deformation of sharp features in 360°-derived point clouds compared to TLS.
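A hedged sketch of the two-step rigid alignment and C2C evaluation described above, using Open3D rather than the authors' tooling; file names, the coarse transform, and thresholds are assumptions:

```python
# Sketch only: align a 360°-derived cloud to a TLS reference with a coarse
# initial transform followed by point-to-point ICP, then compute C2C distances.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("cloud_360.ply")    # hypothetical input files
target = o3d.io.read_point_cloud("tls_reference.ply")

# Step 1: coarse registration. An identity placeholder stands in for the
# manual coarse alignment described in the paper.
T_coarse = np.eye(4)

# Step 2: ICP refinement with an assumed correspondence threshold (metres).
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.10, init=T_coarse,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)

# Cloud-to-cloud (C2C) distances from the aligned source to the TLS reference.
d = np.asarray(source.compute_point_cloud_distance(target))
print(f"mean C2C distance: {1000 * d.mean():.1f} mm")
```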
- New
- Research Article
- 10.1115/1.4071103
- Feb 11, 2026
- Journal of Thermal Science and Engineering Applications
- Zhao Li + 4 more
Abstract With the improvement of living standards, the demand for indoor comfort has increased. Fan coil systems have been found to cause uneven indoor temperature distribution, strong drafts, airflow short-circuiting, stagnant zones, and high noise levels during long-term use, thereby failing to ensure thermal comfort for occupants. This study investigates airflow characteristics and thermal comfort under low-velocity indoor air circulation. By adjusting fan coil and dehumidified displacement ventilation parameters, a more comfortable and healthier environment is achieved. The research focuses on a typical office fan coil unit with a fresh air system. Environmental parameters were measured and recorded on an experimental platform in summer and winter to analyze the impact of different fan coil air parameters on indoor airflow and thermal comfort. The study shows that under low-velocity air circulation (LVAC), short air jets cause temperature differences in the working area. However, the building envelope's insulation, airflow kinetics, and temperature field diffusion maintain stable indoor airflow and acceptable temperature uniformity. Compared to traditional systems, low-velocity systems reduce maximum air velocity by 62% in winter and 20% in summer, with a draft dissatisfaction coefficient (DR) of only 28%–61%. For long-term seated occupants, optimal temperature settings and well-insulated envelopes are required.
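The draft dissatisfaction coefficient (DR) cited above is presumably the ISO 7730 draft-rate model (an assumption; the abstract does not name the standard):

$$DR = (34 - t_a)\,(\bar{v} - 0.05)^{0.62}\,(0.37\,\bar{v}\,Tu + 3.14),$$

with local air temperature $t_a$ in °C, local mean air velocity $\bar{v}$ in m/s, and turbulence intensity $Tu$ in %.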
- New
- Research Article
- 10.1016/j.ecoenv.2026.119805
- Feb 10, 2026
- Ecotoxicology and environmental safety
- Ying Wang + 12 more
Integrating environmental factors and genetic variants in machine learning to assess occupational noise impact on health.
- New
- Research Article
- 10.1002/jgo2.70011
- Feb 8, 2026
- New Zealand Journal of Geology and Geophysics
- Sam B Thorpe‐Loversuch + 2 more
Seismic reflection profiling using shear waves provides constraints on the thickness of sediments beneath the Wellington Central Business District. Standard seismic methods were modified for urban environments where limited grassed areas are available and levels of cultural noise are high. Two‐way travel times on the stacked seismic section were converted to depth using a site‐specific velocity model based on direct velocity measurements from local downhole seismic surveys. Basement depths at Wellington Girls’ College and Waitangi Park are estimated to be 180 ± 19 and 166 ± 16 m, respectively. At Wellington Girls’ College, near‐surface sediments are horizontally layered, whereas at Waitangi Park, sediment layers are tilted in a zone of distributed, steep faulting. This fault zone is interpreted to be the onshore extension of the Aotea Fault that has been identified in Wellington Harbour on marine seismic reflection profiles. At Miramar Polo Ground, collocating a seismic survey with a logged borehole enabled us to benchmark our seismic interpretations against subsurface geology. Our new estimates of basement depth in the Wellington basin provide vital depth constraints for models that predict how shaking from earthquakes can vary across the city due to the basin's geometry.
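For readers outside seismology, the depth conversion mentioned above follows from halving the two-way travel time through the velocity model: for shear-wave interval velocities $v_i$ and interval two-way times $\Delta t_i$ (symbols generic, not taken from the paper),

$$z = \sum_i \frac{v_i\,\Delta t_i}{2},$$

which reduces to $z = vt/2$ for a uniform medium.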
- New
- Research Article
- 10.1038/s41598-026-37566-z
- Feb 7, 2026
- Scientific reports
- Alexandra L Day + 8 more
Recent innovations have made it possible to produce megalibraries, millions of structurally and compositionally distinct nanoparticles on a chip. These megalibraries yield vast volumes of data that are impossible to analyze manually, necessitating the development of automated tools. In previous work, we created a binary classification machine learning model to select quality nanoparticle images for downstream analysis. In this work, we show that adding a custom image processing step before training can produce significantly higher-performing models in a fraction of the time and make them more robust to different image noise levels and microscope acquisition settings. The image processing pipeline proposed here effectively cleans raw nanoparticle images, enhances key features, and allows us to use much lower resolution images and simpler neural network model architectures. These features result in higher performance and significant cost savings. Experiments demonstrate superior performance relative to baseline, including an 18.2% improvement in recall and a 13.1% increase in accuracy. Given the high cost of downstream analysis, it is critical to minimize false positives, and our best-performing model reaches a precision of 95.9% and a weighted F-score of 95.1% on an unseen test set. Additionally, model training time is reduced from hours to less than a minute. We also show that, using this custom image processing pipeline, model performance is significantly improved at lower pixel resolutions compared to downsizing alone. We expect that adopting this pipeline for AI-driven automated nanoparticle characterization will allow researchers to rapidly and accurately analyze much greater volumes of data, thereby accelerating materials discovery.
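An illustrative preprocessing sketch in the spirit described above (the paper's exact pipeline is not given in the abstract): denoise a raw micrograph, boost local contrast, and downsample before classification; filter choices and sizes are assumptions.

```python
# Sketch only: clean and shrink a raw nanoparticle image before classification.
import numpy as np
from skimage import exposure, filters, transform

def preprocess(raw: np.ndarray, out_size=(64, 64)) -> np.ndarray:
    img = raw.astype(np.float32)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)           # scale to [0, 1]
    img = filters.median(img)                                # suppress shot noise
    img = exposure.equalize_adapthist(img, clip_limit=0.02)  # CLAHE enhancement
    return transform.resize(img, out_size, anti_aliasing=True)

# usage on a synthetic noisy frame
frame = np.random.default_rng(0).poisson(5.0, (512, 512)).astype(np.float32)
print(preprocess(frame).shape)  # -> (64, 64)
```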
- New
- Research Article
- 10.55145/ajest.2026.05.01.011
- Feb 6, 2026
- Al-Salam Journal for Engineering and Technology
- Hamzah Abdulkhaleq Naji
Photovoltaic (PV) systems often operate under Partial Shading Conditions (PSC), which severely degrade performance: multiple local maxima appear in the power-voltage (P-V) characteristic curve, trapping conventional control algorithms at local maximum points. Considering shading's profound effect on energy production and system reliability, this paper proposes a new Maximum Power Point Tracking (MPPT) strategy based on the Wave Function Collapse (WFC) algorithm. The proposed approach adopts a probabilistic state-selection scheme to choose suitable steps through the search space toward the global maximum power point (GMPP). The algorithm adaptively balances exploration against exploitation, enabling stable performance across different irradiance and noise levels. Extensive simulation results for a wide range of PSC scenarios show that, compared to the classical Perturb and Observe (P&O) and Particle Swarm Optimization (PSO) methods, the WFC-based MPPT approach achieves faster tracking and better energy harvesting efficiency. The results show that the proposed approach is robust in overcoming the drawbacks of metaheuristic and classical trackers under dynamic environmental conditions.
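The abstract's baseline, classical Perturb and Observe (P&O), in a minimal hill-climbing sketch; the WFC algorithm itself is not specified in the abstract, so it is not reproduced here, and the PV model below is a toy stand-in:

```python
# Classical P&O MPPT: nudge the operating voltage, reverse whenever power drops.
def perturb_and_observe(measure_pv, v_ref=20.0, dv=0.5, steps=200):
    """Hill-climb toward the (possibly local) maximum power point."""
    v, i = measure_pv(v_ref)
    p_prev = v * i
    for _ in range(steps):
        v_ref += dv
        v, i = measure_pv(v_ref)
        p = v * i
        if p < p_prev:      # power fell: reverse the perturbation direction
            dv = -dv
        p_prev = p
    return v_ref

# toy single-peak P-V curve; under partial shading the real curve has several
# local maxima, which is exactly where plain P&O gets trapped
measure = lambda v: (v, max(0.0, 8.0 - 0.01 * (v - 30.0) ** 2))
print(f"P&O settled near v = {perturb_and_observe(measure):.1f} V")
```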
- New
- Research Article
- 10.1007/s11356-026-37443-2
- Feb 6, 2026
- Environmental science and pollution research international
- Saeed Shojaee Barjoee + 1 more
This study aimed to develop a comprehensive risk profile of four key occupational harmful factors (heat stress, inadequate illumination, noise, and respirable dust) within a representative ceramic manufacturing facility in Iran. Standardized instruments and protocols were used to assess the four physical harmful factors. Dust concentration was measured via NIOSH 0600 using SKC pumps and nylon cyclones. Noise levels were recorded with a type 2 sound level meter (Extech 407732). Illuminance was measured with a GM1040 lux meter at a height of 0.85 m, and heat stress was evaluated using a wet-bulb globe temperature (WBGT) meter. The risk ratio (RR) was calculated for each harmful factor as a single risk index. An integrated risk assessment followed, incorporating RR values, the number of exposed workers, and exposure duration. Prioritization of harmful factors and similar exposure groups (SEGs) was performed using the Pareto principle. The findings revealed that the average levels of noise, illumination, respirable dust, and temperature in the studied ceramic industry were 82.88 dB(A), 114.83 lx, 4.15 mg/m³, and 21.01 °C, respectively. The RR matrix analysis identified respirable dust exposure as a high-risk factor, with a prioritization index exceeding 386%. Noise was classified as a medium-risk factor, with priority levels ranging from 321 to 386%. In contrast, poor illumination and heat stress were categorized as low-risk factors (integrated risk index (IRI) < 321%). Among the SEGs, the packing occupational group exhibited the highest comprehensive risk profile (IRI ≥ 379%) and was consequently identified as the top priority for control interventions in accordance with the Pareto principle. This risk-based framework offers a systematic approach for prioritizing occupational health interventions and optimizing resource allocation in industrial environments. Clinical trial number: not applicable.
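The abstract does not give the exact integrated-risk formula, so the sketch below simply combines a risk ratio with the number of exposed workers and exposure hours, then ranks factors Pareto-style; all numbers are illustrative placeholders, not the study's data:

```python
# Schematic integrated-risk ranking; the paper's actual IRI formula may differ.
factors = {            # factor: (RR, exposed_workers, hours_per_shift) -- made up
    "respirable dust": (2.1, 40, 8),
    "noise":           (1.6, 55, 8),
    "illumination":    (0.9, 30, 8),
    "heat stress":     (0.8, 25, 6),
}
iri = {f: rr * n * h for f, (rr, n, h) in factors.items()}
for f in sorted(iri, key=iri.get, reverse=True):   # Pareto-style priority order
    print(f"{f:16s} IRI = {iri[f]:7.1f}")
```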
- New
- Research Article
- Feb 6, 2026
- ArXiv
- Nerea Encina-Baranda + 7 more
Positron range (PR) limits spatial resolution and quantitative accuracy in PET imaging, particularly for high-energy positron-emitting radionuclides like 68Ga. We propose a deep learning method using 3D residual encoder-decoder convolutional neural networks (3D RED-CNNs), incorporating tissue-dependent anatomical information through a μ-map-dependent loss function. Models were trained with realistic simulations and, using initial PET and CT data, generated positron-range-corrected images. We validated the models in simulations and real acquisitions. Three 3D RED-CNN architectures, Single-Channel, Two-Channel, and DualEncoder, were trained on simulated PET datasets and evaluated on synthetic and real PET acquisitions from 68Ga-FH and 68Ga-PSMA-617 mouse studies. Performance was compared to a standard Richardson-Lucy-based positron range correction (RL-PRC) method using metrics such as mean absolute error (MAE), structural similarity index (SSIM), contrast recovery (CR), and contrast-to-noise ratio (CNR). CNN-based methods achieved up to 19 percent SSIM improvement and 13 percent MAE reduction compared to RL-PRC. The Two-Channel model achieved the highest CR and CNR, recovering lung activity with 97 percent agreement to ground truth versus 77 percent for RL-PRC. Noise levels remained stable for CNN models (approximately 5.9 percent), while RL-PRC increased noise by 5.8 percent. In preclinical acquisitions, the Two-Channel model achieved the highest CNR across tissues while maintaining the lowest noise level (9.6 percent). Although no ground truth was available for real data, tumor delineation and spillover artifacts improved with the Two-Channel model. These findings highlight the potential of CNN-based PRC to enhance quantitative PET imaging, particularly for 68Ga. Future work will improve model generalization through domain adaptation and hybrid training strategies.
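For concreteness, minimal versions of two of the evaluation metrics named above; region masks and the exact CNR convention the authors use are assumptions:

```python
# Sketch of MAE and contrast-to-noise ratio as commonly defined in PET work.
import numpy as np

def mae(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean absolute error between a corrected image and ground truth."""
    return float(np.mean(np.abs(pred - truth)))

def cnr(img: np.ndarray, roi: np.ndarray, bg: np.ndarray) -> float:
    """Contrast-to-noise ratio between a lesion ROI mask and a background mask."""
    return float((img[roi].mean() - img[bg].mean()) / img[bg].std())
```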
- New
- Research Article
- 10.1017/jfm.2026.11154
- Feb 6, 2026
- Journal of Fluid Mechanics
- Fu-Yang Yu + 4 more
This study implements blowing/suction control for aerofoil trailing-edge noise and systematically optimises blowing/suction angles and control locations within a Bayesian framework. Two distinct rounds were conducted for direct and sound-source-oriented coarse-grained Bayesian optimisations. In the direct optimisation, the mean overall sound pressure level of far-field noise is selected as the objective function. Optimal control parameters were obtained after 15 iterations, requiring 80 three-dimensional implicit large eddy simulations, and achieved a noise reduction of up to 3.7 dB. To reduce the substantial computational cost, a Gaussian process surrogate model was constructed using the sound source defined by multi-process acoustic theory. This enabled a second round of optimisation, termed sound-source-oriented coarse-grained Bayesian optimisation, which yielded comparable noise reduction. This refined approach exhibited low signal delay and rapid statistical convergence, which can significantly reduce both the computational cost per sample and the number of iterations. Consequently, the total computational cost was reduced to approximately one-sixth of the initial direct optimisation. Moreover, physical insights into noise reduction mechanisms were elucidated through dynamic mode decomposition (DMD), anisotropic invariant mapping and the analysis of source terms within the TNO model across several typical cases. The results indicate that the blowing-control case induces large-scale vortex shedding and enhances DMD mode energy and low-frequency noise emission. Furthermore, the suction control tends to disrupt coherent structures, reduce DMD mode energy and suppress radiated noise. Crucially, the suction control significantly decreases mean velocity gradients within the logarithmic layer and suppresses wall-normal Reynolds stresses, thereby considerably reducing TNO source intensity in this critical region. The optimal case exhibits superior performance across all metrics above, thus laying the foundation for the optimal control strategy. Additionally, the suction control helps attenuate the footprint of turbulent motions in wall-pressure fluctuations, as shown by pressure-velocity coherence analysis, hence promoting noise reduction. This work introduces a novel framework that integrates Bayesian optimisation with advanced noise diagnostic theory, and provides actionable insights for effective trailing-edge noise mitigation.
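Conceptually, the direct optimisation above is an outer Bayesian loop around expensive LES evaluations. A hedged sketch using scikit-optimize, with the LES objective replaced by a cheap analytic stand-in; parameter names, bounds, and the backend are all assumptions, not the authors' setup:

```python
# Sketch of the outer Bayesian-optimisation loop; every real evaluation in the
# paper is a 3-D implicit LES, mocked here by a cheap analytic placeholder.
import numpy as np
from skopt import gp_minimize

def mean_ospl(params):
    angle, xc = params
    # Placeholder objective standing in for "run an LES at this blowing/suction
    # angle and chordwise location, return mean overall SPL (dB)".
    return 110.0 + 1e-3 * angle**2 + 40.0 * (xc - 0.8) ** 2 + 0.1 * np.random.randn()

result = gp_minimize(
    mean_ospl,
    dimensions=[(-90.0, 90.0),   # blowing/suction angle (deg), assumed bounds
                (0.50, 0.95)],   # control location x/c, assumed bounds
    n_calls=80,                  # matches the 80 simulations cited above
    random_state=0)
print("best parameters:", result.x, "objective:", result.fun)
```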
- New
- Research Article
- 10.62225/2583049x.2026.6.1.5754
- Feb 6, 2026
- International Journal of Advanced Multidisciplinary Research and Studies
- Isaac Gondwe + 1 more
Face recognition, as a prominent biometric modality, has witnessed significant growth over the past two decades, becoming pivotal in various commercial and security applications, particularly in identity validation and recognition (Turk & Pentland, 1991) [29]. The evolution of reliable security systems, coupled with advancements in image processing and pattern recognition, has spurred research into new methodologies for enhancing face recognition systems. This thesis designs and develops a facial login system utilizing MATLAB and ORB (Oriented FAST and Rotated BRIEF) algorithms to provide a secure and efficient authentication mechanism (Rublee et al., 2011) [40], aiming at a robust automated face recognition system capable of handling diverse environmental conditions, such as varying noise levels, illumination changes, and occlusions. The ORB algorithm is leveraged for its efficiency in feature detection and description, offering computational benefits over traditional methods (Lowe, 2004). A novel method based on the illumination-reflectance model is proposed for illumination-invariant feature extraction. This approach is computationally efficient and does not rely on prior knowledge of face models or illumination conditions. Additionally, a weighted voting scheme leveraging mutual information and entropy is introduced to improve performance under illumination variations and mitigate occlusions (Viola & Jones, 2001) [30]. This ensemble classifier approach effectively minimizes the impact of environmental factors on face image quality. Through experimental validation, the proposed system demonstrates significant advancements in overcoming challenges commonly faced by existing face recognition systems. By achieving robustness against illumination changes and occlusions, it contributes to enhancing the reliability and applicability of facial biometrics in real-world scenarios.
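A minimal ORB matching sketch, illustrating the feature-detection core the thesis builds on; the thesis uses MATLAB, so Python/OpenCV is substituted here for a self-contained example, and the file names and thresholds are placeholders:

```python
# ORB feature matching for a toy face-login decision (sketch, not the thesis code).
import cv2

enrolled = cv2.imread("enrolled_face.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
probe = cv2.imread("login_attempt.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(enrolled, None)
kp2, des2 = orb.detectAndCompute(probe, None)

# Hamming distance suits ORB's binary descriptors; cross-check filters outliers.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Simple accept/reject rule on the number of strong matches (thresholds assumed).
strong = [m for m in matches if m.distance < 40]
print("login accepted" if len(strong) > 25 else "login rejected")
```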
- New
- Research Article
- 10.1177/10519815251409132
- Feb 5, 2026
- Work (Reading, Mass.)
- Sajja Poojith + 7 more
Background: Lawnmower operators are exposed to high noise and hand-arm vibration (HAV) during their work, originating mainly from the engine and rotating parts. Higher exposure harms the well-being of the operators, resulting in immediate and long-term effects on their health, comfort, and safety. Objective: The study involved measuring noise and HAV from a powered cylindrical lawnmower, developing retrofittable interventions, evaluating noise and HAV levels with the interventions, and comparing their effectiveness using a health risk assessment. Methods: The study was done at three speeds and two modes of operation. The measured noise and HAV amplitudes exceeded the permissible limits of international standards. Higher amplitudes were observed at resonant frequencies of the ear and hand. To mitigate the exposure and increase the safe working hours of the operators, two interventions were developed and retrofitted to the existing lawnmower. Noise and HAV were measured with the interventions and compared against the pre-intervention phase. The operator's physiological, psychophysical, and postural parameters were also assessed during lawnmower operation. Results: The developed interventions reduced the noise level from approximately 95 dB(A) to 85 dB(A), bringing it within internationally permissible limits. HAV was reduced from 23 m/s² to below 10 m/s², thereby increasing the safe exposure time by approximately 2.3 times. However, the operator's physiological, psychophysical, and postural parameters remained unchanged, as operational requirements remained the same. Conclusion: Noise and HAV reduction through the interventions provided a safer working environment for lawnmower operators.
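For context on the reported drop from roughly 95 to 85 dB(A): under the NIOSH recommended-exposure-limit model with a 3 dB exchange rate (assumed here; the abstract does not name the criterion applied), the permissible daily exposure duration is

$$T = \frac{8}{2^{(L - 85)/3}}\ \text{hours},$$

about 0.8 h at 95 dB(A) versus the full 8 h shift at 85 dB(A).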
- New
- Research Article
- 10.1080/10556788.2026.2617623
- Feb 5, 2026
- Optimization Methods and Software
- Nail Bashirov + 2 more
Recently, gradient-free optimization methods have become a major tool in reinforcement learning and memory-efficient LLM fine-tuning. Under the standard setting of uniformly bounded noise variance, an optimal accelerated algorithm has been derived. However, the assumption of bounded variance is strict and is usually not satisfied in practice. We therefore relax it, allowing the noise distribution to be heavy-tailed and thus broadening the class of problems that can be solved. We propose gradient-free algorithms with a zeroth-order oracle under adversarial noise with unbounded variance, for non-smooth convex and convex-concave optimization problems. We apply a clipping operator to deal with heavy-tailedness and batching to allow efficient computation via parallelization. Our analysis provides asymptotic bounds for such key parameters as iteration complexity, oracle complexity, and maximal adversarial noise level.
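A hedged sketch of the ingredients named above: a batched two-point zeroth-order gradient estimate with norm clipping against heavy-tailed oracle noise; the smoothing radius, clip level, and step size are assumptions, not the paper's tuned values:

```python
# Zeroth-order step with batching and gradient clipping (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def zo_clipped_step(f, x, tau=1e-3, batch=16, clip=1.0, lr=0.05):
    d, g = x.size, np.zeros_like(x)
    for _ in range(batch):                          # batching: average B estimates
        e = rng.standard_normal(d)
        e /= np.linalg.norm(e)                      # random unit direction
        g += d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e
    g /= batch
    if (nrm := np.linalg.norm(g)) > clip:           # clipping tames heavy tails
        g *= clip / nrm
    return x - lr * g

# usage: non-smooth convex objective with heavy-tailed (Cauchy) oracle noise
f = lambda x: np.sum(np.abs(x)) + 0.01 * rng.standard_cauchy()
x = np.ones(10)
for _ in range(300):
    x = zo_clipped_step(f, x)
print("noiseless objective at final iterate:", np.sum(np.abs(x)))
```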
- New
- Research Article
- 10.3390/heritage9020057
- Feb 3, 2026
- Heritage
- Alberto Bucciero + 9 more
The H2IOSC project aims to establish a federated cluster of European distributed research infrastructures involved in the humanities and cultural heritage sectors, with operating nodes across Italy. Through four key RIs—DARIAH-IT, CLARIN, OPERAS, and E-RIHS—the project promotes collaboration among researchers with interdisciplinary expertise. Within this framework, DIGILAB functions as the digital access platform for the Italian node of E-RIHS. Conceived as a socio-technical infrastructure for the Heritage Science community, DIGILAB is designed to manage heterogeneous data and metadata through advanced knowledge graph representations. The platform adheres to the FAIR principles and supports the complete data lifecycle, enabling the development and maintenance of Heritage Digital Twins. DIGILAB integrates diverse categories of information related to cultural sites and objects, encompassing historical and artistic datasets, diagnostic analyses, 3D models, and real-time monitoring data. This monitoring capability is achieved through the deployment of cutting-edge Internet of Things (IoT) technologies and large-scale Wireless Sensor Networks (WSNs). As part of DIGILAB, we developed SENNSE (v1.0), a fully open hardware/software platform dedicated to environmental and structural monitoring. SENNSE allows the remote, real-time observation and control of cultural heritage sites (collecting microclimatic parameters such as temperature, humidity, noise levels) and of cultural objects (collecting object-specific data including vibrations, light intensity, and ultraviolet radiation). The visualization and analytical tools integrated within SENNSE transform these datasets into actionable insights, thereby supporting advanced research and conservation strategies within the Cultural Heritage domain. In the following sections, we provide a detailed description of the SENNSE platform, outlining its hardware components and software modules, and discussing its benefits. Furthermore, we illustrate its application through two representative use cases: one conducted in a controlled laboratory environment and another implemented in a real-world heritage context, exemplified by the “Biblioteca Bernardini” in Lecce, Italy.
- New
- Research Article
- 10.1016/j.radi.2026.103341
- Feb 3, 2026
- Radiography (London, England : 1995)
- E M Gray + 11 more
Patient experience and acceptance of a lightweight, compact 3 Tesla MRI system.
- New
- Research Article
- 10.1038/s41597-026-06658-w
- Feb 3, 2026
- Scientific data
- Solomon Nsumba + 6 more
Urban noise pollution is a growing public health concern, particularly in rapidly developing cities where regulatory enforcement is limited. High noise levels have been linked to adverse health effects, including stress, sleep disturbances, and cardiovascular issues. However, data-driven noise monitoring remains scarce in many low-resource settings, limiting efforts to develop effective mitigation strategies. To address this gap, we designed and deployed a large-scale noise data collection pipeline in Kampala and Entebbe, Uganda. Our approach integrates mobile-based tools to capture short audio recordings, categorizations of the type of noise, and sound pressure levels at specific locations. Over the course of the study, we collected 61,821 such annotated noise samples across five divisions in Kampala and four wards in Entebbe. This is to our knowledge the largest existing urban sound dataset, and the first time that the acoustic landscape of a developing-world city has been characterized in such detail. It provides valuable insights into urban noise pollution, forming a foundation for future noise mapping and mitigation efforts.
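The sound pressure levels gathered above follow the standard acoustic definition

$$L_p = 20\log_{10}\!\frac{p_{\mathrm{rms}}}{p_0}\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa},$$

with $p_0$ the reference pressure in air.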
- New
- Research Article
- 10.1101/gr.281001.125
- Feb 3, 2026
- Genome research
- Tianyu Liu + 5 more
The analysis of spatial transcriptomics is hindered by high noise levels and missing gene measurements, challenges that are further compounded by the higher cost of spatial data compared to traditional single-cell data. To overcome these challenges, we introduce spRefine, a deep learning framework that leverages genomic language models to jointly denoise and impute spatial transcriptomic data. Our results demonstrate that spRefine yields more robust cell- and spot-level representations after denoising and imputation, substantially improving data integration. In addition, spRefine serves as a strong framework for model pretraining and the discovery of novel biological signals, as highlighted by multiple downstream applications across datasets of varying scales. Notably, spRefine enhances the accuracy of spatial aging clock estimations and uncovers new aging-related relationships associated with key biological processes, such as neuronal function loss, which offers new insights for analyzing aging effects with spatial transcriptomics.
- New
- Research Article
- 10.1016/j.cmpb.2025.109200
- Feb 1, 2026
- Computer methods and programs in biomedicine
- Boyuan Tan + 4 more
TiDE-Net: A time-guided dual-encoder ResUNet for Positron Emission Tomography (PET) image denoising.