Articles published on Dense Network
- Research Article
- 10.1016/j.scitotenv.2025.180845
- Nov 7, 2025
- The Science of the total environment
- Sang-Jin Lee + 1 more
Reduced-form air quality dispersion modeling for urban scale traffic-related pollutants.
- Research Article
- 10.3389/feart.2025.1695343
- Nov 6, 2025
- Frontiers in Earth Science
- Hongxuan Liu + 8 more
Flash flood hazards on the Qinghai–Tibet Plateau are surging under rapid warming and humidification, driving glacier retreat, lake outbursts, and extreme precipitation that imperil water security across Asia. To address the limitations of studies relying primarily on historical observations and lacking quantitative insights into disaster mechanisms and dynamic prediction, this study integrates a logistic regression model with the geographical detector approach. Using multi-source flash flood records from 1950 to 2023 and 12 environmental variables (elevation, slope, precipitation, river network density, land use, etc.), the analysis quantifies the interactive effects of key drivers and uncovers the nonlinear mechanisms governing flash flood sensitivity. The results indicate that: (1) the model demonstrates strong predictive capability, achieving 78% accuracy and an AUC of 0.87; (2) mean annual precipitation is the dominant factor, while its interaction with river proximity enhances the explanatory power of flash flood disasters by 37%, indicating a nonlinear reinforcing effect; and (3) high-resolution sensitivity mapping for 2023 reveals that areas of high and very high flash flood sensitivity are concentrated in the South Tibet Valley and Hengduan Mountains, aligning with regions of glacial lake expansion and frequent extreme precipitation. In contrast, medium- and low-sensitivity areas are widely distributed across the North Tibet Plateau, where arid geomorphology and sparse river networks exert dominant control. This spatial pattern corresponds closely with regional topographic, climatic, and hydrological processes. The study offers a transferable approach for dynamic flash flood risk early warning, precise disaster zoning, and improved resilience of transboundary basins on the Qinghai–Tibet Plateau.
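The logistic-regression step described above can be illustrated with a minimal sketch: fit a binary flood/no-flood model on a handful of environmental predictors and report accuracy and AUC. The predictors, synthetic labels, and coefficients below are hypothetical stand-ins; the paper's actual 12-variable set and geographical-detector interaction analysis are not reproduced.

```python
# Minimal sketch of a logistic-regression susceptibility workflow with accuracy
# and AUC evaluation. Predictors and labels are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(4500, 800, n),    # elevation (m)
    rng.uniform(0, 45, n),       # slope (degrees)
    rng.gamma(2.0, 200.0, n),    # mean annual precipitation (mm)
    rng.exponential(0.5, n),     # river network density (km/km^2)
])
# Synthetic flood occurrence driven mostly by precipitation and river density.
logit = 0.004 * X[:, 2] + 1.5 * X[:, 3] - 3.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

print("accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```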
- Research Article
- 10.1088/2057-1976/ae13b4
- Nov 6, 2025
- Biomedical Physics & Engineering Express
- Jinxin Luo + 6 more
Objective. Most CNN-based low-dose computed tomography (LDCT) denoising methods reduce noise to some extent, but the black-box nature of neural networks makes them difficult to interpret. Approach. To address this issue, we propose a novel fully sparse-regularized convolutional sparse coding model (CSC-ST) that integrates interpretable convolutional sparse coding with a CNN-based denoising framework, and we design a convolutional neural network (CSCST-Net) to solve the CSC-ST model. Specifically, we develop a generalized sparse transform to enhance conventional transform sparsity, enabling the network to effectively learn and preserve the local sparsity characteristics of the original images. The optimization integrates the Alternating Direction Method of Multipliers (ADMM) with gradient descent, and adaptive convolutional dictionaries allow images to be represented with fewer sparse feature maps, reducing the number of model parameters. Main results. Experiments on the Mayo Clinic dataset demonstrate that CSCST-Net outperforms state-of-the-art methods in noise removal, artifact suppression, and texture detail preservation. Significance. These results indicate that the proposed model is both effective and practical in real-world LDCT denoising applications.
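For readers unfamiliar with the ADMM machinery the abstract refers to, the sketch below solves a plain (non-convolutional) sparse coding problem, min_z 0.5||Dz − x||² + λ||z||₁, with the standard ADMM splitting and soft-thresholding step. It is a generic illustration under simplified assumptions, not the CSC-ST model or its network unrolling; the dictionary and signal are synthetic.

```python
# Generic ADMM for sparse coding: min_z 0.5*||D z - x||^2 + lam*||z||_1.
# Illustration only; not the CSC-ST convolutional model.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_sparse_code(D, x, lam=0.05, rho=1.0, n_iter=200):
    k = D.shape[1]
    z = np.zeros(k)
    u = np.zeros(k)
    A = D.T @ D + rho * np.eye(k)      # factor for the quadratic subproblem
    Dtx = D.T @ x
    for _ in range(n_iter):
        w = np.linalg.solve(A, Dtx + rho * (z - u))   # data-fit step
        z = soft_threshold(w + u, lam / rho)          # sparsity (shrinkage) step
        u = u + w - z                                 # dual update
    return z

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                        # unit-norm dictionary atoms
z_true = soft_threshold(rng.normal(size=128), 1.5)    # sparse ground truth
x = D @ z_true + 0.01 * rng.normal(size=64)           # noisy observation
z_hat = admm_sparse_code(D, x)
print("nonzeros in estimate:", int(np.count_nonzero(np.abs(z_hat) > 1e-3)))
```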
- Research Article
- 10.3390/e27111137
- Nov 5, 2025
- Entropy
- Chen Li + 3 more
The b-value is a critical parameter for gauging seismic activity and is essential for seismic hazard assessment, monitoring stress evolution in focal zones, and forecasting major earthquakes. The minimum magnitude of completeness (Mc), a key indicator of the completeness of an earthquake catalog, reflects the monitoring capability of a seismic network and serves as a crucial foundation for the accurate calculation of the b-value. We began by integrating multi-source earthquake catalogs for mainland China using the nearest-neighbor method. Building on this, we employed a combination of partitioned time-series analysis and a grid-based spatial scanning technique to systematically investigate the spatiotemporal evolution of the Mc and the b-value across mainland China and its adjacent regions. Our findings indicate the following: (1) Since the 1980s, the overall trend of Mc has shifted from high and unstable values to low and stable ones. However, significant earthquake events can cause a notable short-term increase in the Mc. (2) The b-value exhibits strong fluctuations, primarily influenced by the dual effects of the tectonic stress field and catalog completeness. These fluctuations are particularly pronounced in highly active seismic regions such as the Sichuan–Yunnan area and Taiwan, whereas the western Tibetan Plateau has consistently maintained a low b-value. (3) The spatial distributions of both the Mc and the b-value are markedly heterogeneous. By developing a unified and complete earthquake catalog for mainland China, our research highlights the qualitative leap in monitoring capabilities brought about by the continuous densification and technological upgrading of seismic networks. This dataset provides a solid foundation for future seismological research, disaster prevention practices, and especially for the development of AI-based earthquake prediction models.
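The b-value estimation the abstract builds on is commonly done with the Aki (1965) maximum-likelihood formula, b = log10(e) / (mean(M) − (Mc − ΔM/2)), where ΔM is the magnitude binning width. The sketch below applies it to a synthetic Gutenberg–Richter catalog; it is a generic illustration, not the authors' partitioned time-series or grid-scanning workflow.

```python
# Aki (1965) maximum-likelihood b-value on a synthetic Gutenberg-Richter catalog.
import numpy as np

def b_value_ml(mags, mc, dm=0.1):
    """b = log10(e) / (mean(M) - (Mc - dm/2)) for events with M >= Mc."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

rng = np.random.default_rng(42)
mc, b_true, dm = 2.0, 1.0, 0.1
# Above Mc, Gutenberg-Richter magnitudes are exponential with scale log10(e)/b.
mags = (mc - dm / 2.0) + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)
mags = np.round(mags, 1)          # emulate 0.1-unit magnitude binning
print("estimated b-value:", round(b_value_ml(mags, mc, dm), 2))  # close to 1.0
```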
- Research Article
- 10.29227/im-2025-02-02-090
- Nov 5, 2025
- Inżynieria Mineralna
- Lu Yuan + 3 more
The concept of "integration of railway station and city" evolved from the TOD (Transit-Oriented Development) concept has become the future trend of the development of China's railway passenger transport hubs. Its core objective is to promote the transformation of the space within the hub station area from a "transit space" to a "destination space". Under this concept, the pedestrian system within the station area is regarded as the core link and key element connecting the hub station and the city. In this study, by constructing an accessibility evaluation model, combining data surveys and using the area ratio method, the walkability of 18 railway passenger transport hubs in four major urban agglomerations in China is evaluated, and the influencing factors causing the differences in their walkability are analyzed. The results show that there is a strong correlation between the walkability of the hub and factors such as the size of the hub, the density of the road network, express/elevated passenger drop-off lanes, pedestrian facilities, and internal transfer circulation lines. Among them, following the traffic-dominated spatial development model has become the main reason for the phenomenon of separation between the station and the city. Finally, the article puts forward measures and suggestions on how to improve the walkability of China's railway passenger transport hubs, and further strengthen the integrated development between the hubs and the city.
- Research Article
- 10.3390/app152111803
- Nov 5, 2025
- Applied Sciences
- Xianguo Yan + 4 more
To address the urgent demand for high-precision positioning in power industry operations within sparse reference station areas, this paper proposes a real-time kinematic positioning method integrating BeiDou multi-antenna Precise Point Positioning–Real-Time Kinematic (PPP-RTK) with inertial measurement unit (IMU) assistance. By combining the strengths of Precise Point Positioning (PPP) and Real-Time Kinematic (RTK) technologies, we establish a multi-antenna observation model based on State Space Representation (SSR), incorporating satellite-based augmentation signals and atmospheric correction information from sparse reference station networks. Lie group theory is employed to enhance the Extended Kalman Filter (EKF) for simultaneous estimation of position, attitude, and ambiguity parameters. The integration of IMU measurements significantly improves robustness against environmental interference in dynamic scenarios. Experimental results demonstrate average positioning errors of 3.12 cm, 3.71 cm, and 6.23 cm in the East, North, and Up (ENU) directions, respectively, with an average convergence time of 1.62 min. Compared with non-IMU-augmented single-antenna PPP-RTK solutions, the proposed method achieves accuracy improvements up to 59.6% while maintaining stability in signal-occluded environments. This approach provides centimeter-level real-time positioning support for critical power grid operations in remote areas such as desert and Gobi regions, including infrastructure inspection and precise tower assembly, thereby significantly improving the efficiency of intelligent grid operation and maintenance.
- Research Article
- 10.1080/17477891.2025.2583081
- Nov 5, 2025
- Environmental Hazards
- Zhongyao Yi + 2 more
This study examines the emergence and function of spontaneous digital communities during wildfire emergencies through a mixed-methods analysis of four major wildfire events: the August Complex Fire (California, 2020), Dixie Fire (California, 2021), Marshall Fire (Colorado, 2021), and Maui Wildfires (Hawaii, 2023). It investigates how digital communities complement traditional emergency communication systems and contribute to community resilience. Key findings reveal that digital communities consistently emerged 6.8 h before official emergency communications, demonstrating rapid self-organisation, with network density reaching 0.84 within 12 h of crisis onset. The study identified 127 distinct information hubs across the four cases, with 67% of posts including source attribution and community-based rumour control achieving an average response time of 43 min. However, significant socioeconomic disparities emerged: residents in the lowest income quintile participated 43% less than those in the highest quintile, and rural residents faced the dual challenges of higher wildfire exposure and reduced access to digital infrastructure. The study contributes a new theoretical framework for understanding digital community resilience and provides evidence-based recommendations for developing more effective, equitable emergency communication systems that leverage rather than replace organic digital networks while addressing persistent inequalities in digital access and participation.
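The network-density figure quoted above (0.84) follows the standard definition for an undirected graph, density = 2E / (N(N − 1)). A minimal sketch with a hypothetical interaction graph (not data from the four wildfire cases):

```python
# Network density of a hypothetical crisis-communication interaction graph.
import networkx as nx

G = nx.Graph()
# Hypothetical reply/mention ties among five participants in a wildfire thread.
G.add_edges_from([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
                  ("b", "d"), ("c", "d"), ("c", "e"), ("d", "e")])

n, m = G.number_of_nodes(), G.number_of_edges()
print("density (networkx):     ", nx.density(G))
print("density = 2E / (N(N-1)):", 2 * m / (n * (n - 1)))   # 0.8 for this toy graph
```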
- Research Article
- 10.4401/ag-9328
- Nov 4, 2025
- Annals of Geophysics
- Petr Kolínský + 6 more
Data quality checks are essential for any broad-band seismological network, and in particular for data from temporary passive seismic experiments. These checks concern (i) the availability and retrievability of the data from public data archives, (ii) the noise conditions at the stations, (iii) the formal properties and correctness of metadata, (iv) the mutual consistency between data and metadata, and finally (v) the quality of the data itself. Methods for these checks are introduced and applied to the AdriaArray Seismic Network. We present techniques for evaluating the quality of individual stations as well as techniques that allow us to detect outlying amplitudes and arrival times in the case of a dense network. Results of the tests are summarized in the form of maps; in addition, details are given in an online repository. Our checks are continuously repeated and the results are updated to ensure high data quality. The aim of our study is to provide the user with useful information on the quality of AdriaArray data, as well as with suggestions for their own data quality assessment. In addition, the presented data quality checks form the basis for data curation by station and network operators. The suggested approaches can also be applied to other large, dense seismic networks.
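One simple way to flag outlying amplitudes across a dense network, in the spirit of the station-level checks described above, is a robust (MAD-based) z-score on log amplitudes for a single event. The sketch below is a generic illustration with hypothetical station amplitudes, not the AdriaArray quality-control implementation:

```python
# Robust (MAD-based) z-score check for outlying station amplitudes of one event.
# Generic illustration with hypothetical amplitudes; not the AdriaArray code.
import numpy as np

def flag_amplitude_outliers(amplitudes, threshold=3.5):
    """Flag stations whose log-amplitude deviates strongly from the network median."""
    log_amp = np.log10(np.asarray(amplitudes, dtype=float))
    med = np.median(log_amp)
    mad = np.median(np.abs(log_amp - med))
    z = 0.6745 * (log_amp - med) / mad    # modified z-score
    return np.abs(z) > threshold

# Hypothetical peak amplitudes (counts) recorded at ten stations; station 8 is bad.
amps = [1.1e4, 0.9e4, 1.3e4, 1.0e4, 1.2e4, 9.5e3, 1.05e4, 1.15e4, 4.0e6, 1.0e4]
print("outlier station indices:", np.nonzero(flag_amplitude_outliers(amps))[0])
```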
- Research Article
- 10.1021/acsami.5c14678
- Nov 4, 2025
- ACS applied materials & interfaces
- Fan Zhao + 5 more
Solvent-driven surface instabilities in soft materials offer a powerful route to generate spontaneous patterns without external templating; yet the mechanisms governing their emergence, evolution, and long-term modulation remain elusive. Here, we uncover the time-dependent formation of quasi-periodic triradial patterns on the surface of thin, soft silicone-based viscous films undergoing hexane extraction and drying. Using dual-wavelength reflection interference contrast microscopy, we observe a reproducible morphological progression: from shallow circular domains at short extraction times to a well-defined array of triradial, three-armed patterns at longer durations, driven by the buildup of internal stress, surface-to-bulk modulus gradients, and network densification. Systematic studies across silicone elastomers and gels reveal that while the triradial patterning is broadly conserved, its geometry is tunable by factors such as cross-link density and solvent retention. These results establish a general mechanism of solvent-mediated pattern formation in soft silicone-based viscous films and offer a potential route for designing dynamic and programmable surface architectures through controlled solvent processing.
- Research Article
- 10.1038/s41598-025-22349-9
- Nov 4, 2025
- Scientific Reports
- Kamilia Kemel + 8 more
This work presents a novel, non-invasive method that combines high-resolution 3D Line-field Confocal Optical Coherence Tomography (LC-OCT) images with an advanced, in-house developed automated 3D segmentation algorithm to quantitatively analyze dermal fiber characteristics in vivo. This approach marks the first in-depth investigation of dermal fibers, enabling precise characterization of age-related changes, ethnic differences, and the effects of anti-aging skincare products on the cheekbone region of Caucasian and Asian women. Our algorithm accurately extracts fiber metrics, revealing that aging correlates with shorter fiber length and increased anisotropy. Although Asians exhibited a denser fiber network than Caucasians, both ethnicities showed comparable mean fiber lengths and anisotropy. Furthermore, anti-aging skincare treatments significantly enhanced fiber length, node count, and network density while reducing anisotropy over one and three months. This innovative integration of cutting-edge imaging and algorithmic analysis provides valuable insights for cosmetic applications and paves the way for future non-invasive dermatological research.
- Research Article
- 10.1142/s0218126626450015
- Nov 4, 2025
- Journal of Circuits, Systems and Computers
- Arash Golabi + 4 more
This paper introduces a dual-path deep learning framework specially designed for the efficient detection and classification of Hardware Trojans (HTs) through Side-Channel Analysis (SCA). Using Markov Transition Field (MTF) encoding and a reshaping strategy, the proposed method first converts side-channel time-series signals, including power traces, electromagnetic leaks, and timing data, into two different image-like formats. Detecting subtle Trojan activity requires these representations to capture intricate signal dynamics. Subsequently, each image is processed by a separate convolutional neural network (CNN) branch within the dual-path architecture, with each path optimized for extracting complementary features. To enhance classification performance, the outputs of both CNNs are fused via a dense neural network layer. The dual-path mechanism contributes to improved detection accuracy and robust feature extraction, while the overall architecture supports the classification of specific Trojan types. Evaluation using the publicly available AES Hardware Trojan dataset sourced from TrustHub and IEEE DataPort demonstrates that the proposed model benefits from CNN-based feature learning and outperforms several existing approaches.
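The Markov Transition Field (MTF) encoding mentioned above maps a 1-D trace of length n to an n×n image: samples are assigned to quantile bins, a bin-to-bin transition matrix is estimated from consecutive samples, and pixel (i, j) takes the transition probability between the bins of samples i and j. A minimal sketch with a hypothetical power trace, assuming quantile binning (the paper's exact preprocessing and reshaping strategy are not reproduced):

```python
# Markov Transition Field (MTF) encoding of a 1-D trace into an image.
# Generic illustration; the trace and bin count are hypothetical.
import numpy as np

def markov_transition_field(x, n_bins=8):
    x = np.asarray(x, dtype=float)
    # Assign every sample to a quantile bin (0 .. n_bins-1).
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)
    # Estimate the bin-to-bin transition matrix from consecutive samples.
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):
        W[a, b] += 1.0
    row_sums = W.sum(axis=1, keepdims=True)
    W = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    # Pixel (i, j) is the transition probability between the bins of samples i and j.
    return W[np.ix_(q, q)]

t = np.linspace(0, 4 * np.pi, 256)
trace = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)  # toy "power trace"
mtf = markov_transition_field(trace)
print("MTF image shape:", mtf.shape)   # (256, 256), ready for a CNN branch
```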
- Research Article
- 10.1093/gji/ggaf440
- Nov 4, 2025
- Geophysical Journal International
- Seungwoo Park + 3 more
Summary Gangwon Province, located in the central part of the Korean Peninsula, features northeast–southwest faults and tectonic structures formed by plutonic intrusions. Despite decades of geological investigations from near-surface to the upper crust in Gangwon Province, the lithospheric structure of this region remains poorly understood. The primary objective of this study is to identify velocity anomalies potentially associated with plutonic intrusions and to elucidate the formation processes and mechanisms governing the crustal and upper mantle structures in this region. We employed Helmholtz tomography to generate phase-velocity maps for periods of 10–40 s using a dense seismic network of 101 stations. These maps were subsequently inverted to obtain an S-wave velocity model from the upper crust to the uppermost mantle. Our results reveal northeast–southwest-trending low-velocity anomalies along major faults in central to northern Gangwon Province (i.e. eastern Gyeonggi Massif), extending to depths of approximately 25–30 km. These low-velocity anomalies align with the orientations of Jurassic granitoid intrusions formed through partial melting processes. Additionally, we identified other low-velocity anomalies, likely formed by Late Cretaceous intrusions, which are oriented perpendicular to the major faults. In contrast, the southeastern Gangwon Province (i.e. Taebaeksan Basin) exhibits a distinctly different velocity structure, lacking features indicative of granitic intrusions and showing low-velocity anomalies confined to shallow depths. The pronounced low-velocity anomalies observed at depths of 5–10 km in Taebaeksan Basin are attributed to a complex fault zone influenced by Permo-Triassic collisional orogeny.
- Research Article
- 10.1161/circ.152.suppl_3.4364329
- Nov 4, 2025
- Circulation
- Harris Avgousti + 6 more
Introduction: Severe aortic regurgitation (AR) is characterized by significant retrograde blood flow in the aorta and remains difficult to quantitatively evaluate by echocardiography. By providing comprehensive insights into hemodynamic changes and quantifying regurgitant fraction (RF) across various locations of the aorta, this study investigated the potential of 4D flow MRI to enhance diagnostic accuracy and inform clinical decision-making. Methods: An institutional database was queried for patients with chronic AR on echocardiography and paired cardiac MRIs with aortic 4D flow MRI. Patients with LVEF < 50%, concomitant mitral regurgitation, or aortic stenosis were excluded. A fully automated 4D flow MRI processing tool, performing standard preprocessing corrections and aortic 3D segmentation using separately trained machine learning models (Dense U-net convolutional neural network architecture), was used. Through-plane flow was quantified at 7 AHA-standardized locations: aortic annulus, sinotubular junction, mid ascending aorta, distal ascending aorta, aortic arch, proximal descending aorta, and mid descending aorta. 4D flow MRI-based quantifications of RF were assessed for differentiating severe AR, using echo gradings as the reference classification. Adjudicated clinical outcome data included cardiac-related hospitalizations such as heart failure, arrhythmias, and inpatient management of valve intervention. Results: Of 59 patients with chronic AR, the mean age was 49 ± 14.5 years, LVEF 56.5 ± 8.3%, and LV end-diastolic volume 251 ± 74 mL; 90% were male and 73% had bicuspid aortic valves. Receiver operating characteristic (ROC) analysis of 4D flow MRI RFs revealed that the optimal anatomic location to differentiate severe AR, as graded by echo, was the mid descending aorta (AUC = 0.79). In patients with moderate, moderate-severe, and severe AR on echo, Kaplan-Meier analysis revealed significant differences in cardiac-related hospitalization rates and time to valve intervention when patients were median-split by the optimal mid-descending aorta ROC RF threshold (35%), but not at other locations of the aorta or with RFs calculated by traditional 2D phase-contrast MRI (Figure 1). Conclusion: The optimal location for discerning severe aortic regurgitation by RF on 4D flow analysis is the mid-descending aorta. A 4D flow-quantified RF of 35% at the mid-descending aorta was associated with cardiac-related hospitalizations.
- Research Article
- 10.1161/circ.152.suppl_3.4365817
- Nov 4, 2025
- Circulation
- Francois Chesnais + 2 more
Modeling cardiac disease has been a central goal in tissue engineering. Engineered human cardiac tissues now serve as valuable platforms for studying heart tissue replacement, disease modeling, and advancing therapeutic discovery. However, we have yet to recreate the complex and functional human myocardium in vitro, particularly with respect to integrating perfused vasculature and immune components. The lack of vasculature continues to significantly limit the ability to study conditions that involve disruptions in blood flow, such as coronary heart disease and ischemia-reperfusion injury. To this end, we engineered an in vitro model of human myocardium from a single iPSC line encompassing the diverse cellular landscape of the native myocardium to more faithfully recapitulate the functionality of the adult human heart. The vascularized cardiac tissues were generated using WTC11-hiPSCs differentiated into cardiomyocytes (iCM), cardiac fibroblasts (iCF), endothelial cells (iEC), and resident macrophages (irMf), and assessed for the appropriate cell-identity markers via immunofluorescent staining. Tissues were generated by encapsulating cells in a fibrin hydrogel and culturing the resulting cell-hydrogel constructs stretched between two elastic pillars. The tissues were cultured for 7 days and electrically stimulated for another 7 days. Immunofluorescent staining was performed to assess capillary formation. We demonstrate that hiPSC-derived endothelial cells have the capacity to form dense vascular networks that are aligned with hiPSC-cardiomyocytes in the direction of tissue contraction. After 7 days of culture, the tissues were highly vascularized with an interconnected network of capillaries demonstrating active angiogenic sprouting. We further observed a close interaction between iCFs and iECs, with fibroblasts adopting a perivascular-like morphology and wrapping around capillaries with open lumens. Finally, the incorporation of hiPSC-derived macrophages and the application of electrical stimulation enabled long-term survival of the vascular networks and increased network complexity. This work paves the way for the development of autologous vascularized cardiac tissues to support patient-specific studies of cardiovascular diseases.
- Research Article
- 10.1093/fqsafe/fyaf063
- Nov 4, 2025
- Food Quality and Safety
- Run Quan + 5 more
Objectives: To enhance the quality of tofu produced using brine (MgCl2), this study investigated the effects of an emulsion-controlled-release coagulant (ECRC) prepared through ultrasonic emulsification on the physical and gelation properties of tofu. Materials and Methods: The ECRC was prepared using different ultrasonic amplitudes. The average particle size, particle size distribution, viscosity, stability, and Mg²⁺ encapsulation rate were measured to determine the optimal preparation conditions. Then, the Mg²⁺ controlled-release properties and storage stability of the ECRC were measured, and the effects of the ECRC on tofu quality were investigated. Results: The particle size distribution of the ECRC was uniform at an ultrasonic amplitude of 80%, with an average particle size of 638.12±16.23 nm, excellent emulsion stability, and a Mg²⁺ encapsulation rate of 51.23%±0.02%. In comparison to MgCl2, the ECRC significantly extended the coagulation time of soybean protein, increasing it from 9.00±0.82 to 28.00±1.00 s. Concurrently, it facilitated the formation of a uniform and dense gel network within the tofu while markedly enhancing its moisture content. The hardness, elasticity, texture, and overall appearance of tofu prepared using the ECRC were superior to those of traditional brine coagulant (TBC) tofu, improving both its physical attributes and textural quality. Conclusions: This study demonstrated that employing ultrasonic emulsification to prepare emulsion controlled-release coagulants effectively prolongs soybean protein gelation time and enhances tofu quality. These findings provide a theoretical foundation for the industrial-scale production of tofu.
- Research Article
- 10.1007/s11403-025-00462-2
- Nov 4, 2025
- Journal of Economic Interaction and Coordination
- Brigitta Tóth-Bozó + 1 more
Abstract This paper develops a credibility-weighted DeGroot-type agent model to examine how economic expectations evolve within directed, weighted networks. After reviewing key expectation types, it explores how unequal influence and the presence of opinion leaders shape collective dynamics. The model is tested on random and scale-free network structures with varying levels of connectivity, initial expectation distributions, and reliability indices. Results indicate that network topology is the dominant factor influencing aggregate outcomes. Dense networks tend to reach rapid consensus, while scale-free structures sustain persistent heterogeneity and slower convergence. Sparse networks display partial and delayed alignment of expectations. Opinion leaders meaningfully shift aggregate expectations only when both highly credible and well connected; otherwise, their influence remains localized. By linking micro-level interaction patterns to macro-level expectation outcomes, the study contributes to refining the understanding of opinion dynamics in an economic context and suggests potential implications for expectation modelling and policy communication in interconnected systems.
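The DeGroot updating rule underlying the model is simply x(t+1) = W x(t), where W is a row-stochastic matrix of (credibility-weighted) trust. The sketch below iterates this rule on a small, hypothetical dense network; it is a toy illustration of the mechanism, not the paper's calibrated model:

```python
# DeGroot-style expectation updating on a directed, weighted (row-stochastic) network.
# Toy illustration with hypothetical trust weights and initial expectations.
import numpy as np

rng = np.random.default_rng(3)
n = 6
W = rng.random((n, n))                 # nonnegative trust/credibility weights
W /= W.sum(axis=1, keepdims=True)      # row-stochastic: each row sums to 1

x = rng.normal(loc=2.0, scale=1.0, size=n)   # initial expectations (e.g., inflation, %)
for _ in range(50):
    x = W @ x                          # each agent averages neighbours' expectations
print("expectations after 50 rounds:", np.round(x, 3))  # near-consensus on a dense W
```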
- Research Article
- 10.3390/radiation5040032
- Nov 3, 2025
- Radiation
- Mostafa Zahed + 1 more
Fractal dimension (Frac) and lacunarity (Lac) are frequently proposed as biomarkers of multiscale image complexity, but their incremental value over standardized radiomics remains uncertain. We position both measures within the Image Biomarker Standardisation Initiative (IBSI) feature space by running a fully reproducible comparison in two settings. In a baseline experiment, we analyze N=1000 simulated 64×64 textured ROIs discretized to Ng=64, computing 92 IBSI descriptors together with Frac (box counting) and Lac (gliding box), for 94 features per ROI. In a wavelet-augmented experiment, we analyze N=1000 ROIs and add level-1 wavelet descriptors by recomputing first-order and GLCM features in each sub-band (LL, LH, HL, and HH), contributing 4×(19+19)=152 additional features and yielding 246 features per ROI. Feature similarity is summarized by a consensus score that averages z-scored absolute Pearson and Spearman correlations, distance correlation, maximal information coefficient, and cosine similarity, and is visualized with clustered heatmaps, dendrograms, sparse networks, PCA loadings, and UMAP and t-SNE embeddings. Across both settings a stable two-block organization emerges. Frac co-locates with contrast, difference, and short-run statistics that capture high-frequency variation; when wavelets are included, detail-band terms from LH, HL, and HH join this group. Lac co-locates with measures of large, coherent structure—GLSZM zone size, GLRLM long-run, and high-gray-level emphases—and with GLCM homogeneity and correlation; LL (approximation) wavelet features align with this block. Pairwise associations are modest in the baseline but become very strong with wavelets (for example, Frac versus GLCM difference entropy, which summarizes the randomness of gray-level differences, with |r|≈0.98; and Lac versus GLCM inverse difference normalized (IDN), a homogeneity measure that weights small intensity differences more heavily, with |r|≈0.96). The multimetric consensus and geometric embeddings consistently place Frac and Lac in overlapping yet separable neighborhoods, indicating related but non-duplicative information. Practically, Frac and Lac are most useful when multiscale heterogeneity is central and they add a measurable signal beyond strong IBSI baselines (with or without wavelets); otherwise, closely related variance can be absorbed by standard texture families.
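For orientation, the two measures compared above can be computed on a binary ROI with a few lines of code: box counting estimates the fractal dimension from the slope of log N(s) against log(1/s), and gliding-box lacunarity summarizes the variance-to-mean² ratio of box masses. The sketch below uses a random synthetic ROI and is a generic illustration, not the paper's IBSI pipeline:

```python
# Box-counting fractal dimension and gliding-box lacunarity on a binary ROI.
# Generic illustration with a random synthetic ROI; not the IBSI pipeline.
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())   # occupied boxes at scale s
    # Slope of log N(s) versus log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def gliding_box_lacunarity(img, r=4):
    masses = []
    for i in range(img.shape[0] - r + 1):
        for j in range(img.shape[1] - r + 1):
            masses.append(img[i:i + r, j:j + r].sum())       # box "mass"
    masses = np.asarray(masses, dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0

rng = np.random.default_rng(7)
roi = (rng.random((64, 64)) < 0.3).astype(int)   # hypothetical binary texture ROI
print("box-counting dimension:", round(box_counting_dimension(roi), 2))
print("lacunarity (r=4):      ", round(gliding_box_lacunarity(roi), 3))
```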
- Research Article
- 10.3390/land14112183
- Nov 3, 2025
- Land
- Can Wang + 2 more
Guided by the “Healthy China” initiative, understanding the impact of the built environment on running behavior is essential for encouraging regular physical activity and advancing public health. This study addresses a critical gap in healthy city research by examining the spatial heterogeneity in how urban environmental contexts affect residents’ running preferences. Focusing on two contrasting areas of Suzhou, namely the historic Gusu District and the modern Industrial Park District, we developed a 5Ds-based analytical framework (density, accessibility, diversity, design, and visual) that incorporates Suzhou’s unique water networks and street features. Methodologically, we used Strava heatmap data and multi-source environmental indicators to quantify built-environment attributes and examined their relationships with running-space selection. We applied linear regression and interpretable machine learning to reveal overall associations, while geographically weighted regression (GWR) was used to capture spatial variations. Results reveal significant spatial heterogeneity in how the built environment influences running-space selection. While the two districts differ in their urban form, runners in Gusu District prefer dense and compact street networks, whereas those in Industrial Park District favor open, natural spaces with higher levels of human vibrancy. Despite these differences, both districts show consistent preferences for spaces with a more intense land use mix, stronger transportation accessibility, and larger parks and green spaces. The multi-dimensional planning strategies derived from this study can improve the urban running environment and promote the health and well-being of residents.
- Research Article
- 10.3389/frobt.2025.1674421
- Nov 3, 2025
- Frontiers in Robotics and AI
- Kamilya Smagulova + 4 more
Autonomous driving has the potential to enhance driving comfort and accessibility, reduce accidents, and improve road safety, with vision sensors playing a key role in enabling vehicle autonomy. Among existing sensors, event-based cameras offer advantages such as a high dynamic range, low power consumption, and enhanced motion detection capabilities compared to traditional frame-based cameras. However, their sparse and asynchronous data present unique processing challenges that require specialized algorithms and hardware. While some models originally developed for frame-based inputs have been adapted to handle event data, they often fail to fully exploit the distinct properties of this novel data format, primarily due to its fundamental structural differences. As a result, new algorithms, including neuromorphic, have been developed specifically for event data. Many of these models are still in the early stages and often lack the maturity and accuracy of traditional approaches. This survey paper focuses on end-to-end event-based object detection for autonomous driving, covering key aspects such as sensing and processing hardware designs, datasets, and algorithms, including dense, spiking, and graph-based neural networks, along with relevant encoding and pre-processing techniques. In addition, this work highlights the shortcomings in the evaluation practices to ensure fair and meaningful comparisons across different event data processing approaches and hardware platforms. Within the scope of this survey, system-level throughput was evaluated from raw event data to model output on an RTX 4090 24GB GPU for several state-of-the-art models using the GEN1 and 1MP datasets. The study also includes a discussion and outlines potential directions for future research.
- Research Article
- 10.5194/acp-25-14387-2025
- Nov 3, 2025
- Atmospheric Chemistry and Physics
- Dominik Brunner + 9 more
Abstract. Urban areas are significant contributors to global CO2 emissions, requiring detailed monitoring to support climate neutrality goals. This study presents a high-resolution modeling framework using GRAMM/GRAL, adapted for simulating atmospheric CO2 concentrations from anthropogenic and biospheric sources and sinks in Zurich, Switzerland. The framework resolves atmospheric concentrations at the building scale, and it employs a detailed inventory of anthropogenic emissions as well as biospheric fluxes, which were calculated using the Vegetation Photosynthesis and Respiration Model (VPRM). Instead of simulating the full dynamics of meteorology and atmospheric transport, the dispersion of CO2 is precomputed for more than 1000 static weather situations, from which the best match is selected for any point in time based on the simulated and measured meteorology in and around the city. In this way, time series over multiple years can be produced with minimal computational cost. Measurements from a dense network of mid-cost CO2 sensors are used to validate the model, demonstrating its capability to capture spatial and temporal CO2 variability. Applications to other cities are discussed, emphasizing the need for high-quality input data and tailored solutions for diverse urban environments. The work contributes to advancing urban CO2 monitoring strategies and their integration with policy frameworks.
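The "best matching precomputed weather situation" step described above can be sketched as a weighted nearest-neighbour lookup over a catalogue of meteorological states. The feature choice (wind components plus a stability class) and the catalogue below are hypothetical; the actual GRAMM/GRAL matching uses the simulated and measured meteorology in and around the city:

```python
# Nearest-neighbour selection of the best-matching precomputed weather situation.
# Features (wind u, v, stability class) and the catalogue are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
catalogue = np.column_stack([
    rng.uniform(-10, 10, 1000),               # wind u (m/s)
    rng.uniform(-10, 10, 1000),               # wind v (m/s)
    rng.integers(1, 7, 1000).astype(float),   # stability class (1-6)
])

def best_match(observed, catalogue, weights=(1.0, 1.0, 2.0)):
    """Index of the precomputed situation closest to the observed meteorology."""
    d = (catalogue - observed) * np.asarray(weights)
    return int(np.argmin(np.einsum("ij,ij->i", d, d)))   # weighted squared distance

obs = np.array([3.2, -1.5, 4.0])   # observed u, v and stability for one hour
idx = best_match(obs, catalogue)
print("selected situation:", idx, "->", catalogue[idx])
```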