Articles published on Variational model
7396 Search results
- New
- Research Article
- 10.1177/14759217251393117
- Dec 4, 2025
- Structural Health Monitoring
- Long Zhao + 3 more
To address the high concealability of minor damage in transmission lines and the low fault diagnosis accuracy achieved under strong noise, a novel fault diagnosis method for transmission lines is proposed. First, a data processing method based on adaptive modal filtering is developed by combining a variational constraint model with an adaptive frequency-band extraction strategy. Subsequently, by leveraging the generalized Fourier transform, pseudo-peak effects near the modal frequencies are suppressed, achieving thorough noise filtering without altering the intrinsic state characteristics of the transmission lines. For fault diagnosis, a convolutional neural network enhanced with an attention module is constructed, and a fault diagnosis model integrated with bidirectional long short-term memory (BiLSTM) is proposed. By embedding a convolutional block attention module (CBAM), network weights are dynamically adjusted to enhance feature representation in both the channel and spatial dimensions, while the BiLSTM strengthens the model's ability to process time-series data. Finally, the proposed method is validated on a conductor vibration test platform, demonstrating high diagnostic accuracy and superior performance in noisy environments compared with other models.
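The channel branch of a convolutional block attention module, as referenced in the abstract, can be sketched as follows (a minimal NumPy illustration with randomly initialized MLP weights; the layer sizes and reduction ratio are assumptions, not the authors' trained configuration):

```python
import numpy as np

def channel_attention(feature_map, reduction=2, rng=None):
    """CBAM-style channel attention on a (C, H, W) feature map.

    Illustrative sketch with random MLP weights -- not the authors' network.
    Returns the reweighted feature map and the per-channel gate weights."""
    C = feature_map.shape[0]
    rng = np.random.default_rng(0) if rng is None else rng
    # Shared two-layer MLP applied to average- and max-pooled channel descriptors.
    W1 = rng.standard_normal((C // reduction, C)) * 0.1
    W2 = rng.standard_normal((C, C // reduction)) * 0.1
    avg = feature_map.mean(axis=(1, 2))
    mx = feature_map.max(axis=(1, 2))
    logits = W2 @ np.maximum(W1 @ avg, 0.0) + W2 @ np.maximum(W1 @ mx, 0.0)
    weights = 1.0 / (1.0 + np.exp(-logits))  # sigmoid gate, one weight per channel
    return feature_map * weights[:, None, None], weights
```

In a trained network the gate learns to amplify informative channels and suppress the rest; the spatial branch applies the same idea across pixel positions.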
- New
- Research Article
- 10.1145/3770685
- Dec 2, 2025
- Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
- Fulong Liu + 7 more
Recent advances have demonstrated the great potential of millimeter-wave (mmWave) signals for contactless cardiac sensing, enabling various applications such as heart rate variability (HRV) tracking and arrhythmia detection. However, the absence of structural characterization in mmWave cardiac signals, combined with complex interference, makes it difficult to separate cardiac rhythm, cardiac pattern, and external interference. As a result, existing methods predominantly process these entangled components jointly. This missing step of feature disentanglement, which is fundamental to biosignal analysis, severely limits performance and hinders the translation of mmWave cardiac sensing into clinical practice. In this work, we propose the first disentangled feature learning framework for contactless cardiac mmWave sensing. The key innovation lies in the design of a variational feature disentanglement model, which embeds structural priors of mmWave cardiac signals to construct an inductive bias that facilitates effective disentanglement. This design enables the disentanglement of three key components from signals: cardiac-irrelevant interference features, intrinsic cardiac rhythm features, and intrinsic cardiac pattern features, thereby supporting effective cardiac sensing. We evaluate our framework on a large-scale clinical dataset comprising 7,090 outpatients. Experimental results demonstrate superior performance compared to baseline methods, achieving 27.96% and 28.42% average improvements in HRV tracking and arrhythmia detection tasks, respectively, thus highlighting its strong potential to bridge the gap between mmWave sensing technology and real-world clinical applications.
- New
- Research Article
- 10.1117/1.jbo.30.12.126005
- Dec 1, 2025
- Journal of Biomedical Optics
- Xin Wang + 2 more
Significance: Photoacoustic tomography (PAT) is an emerging biomedical imaging technology that offers high contrast and high resolution, showing great potential for applications in medical imaging. However, existing regularization methods often lead to instability and artifacts in the reconstruction due to imbalanced regularization parameter settings. To address these issues, we propose a reconstruction algorithm based on the L-alternating direction method of multipliers (ADMM) for PAT, which significantly improves image reconstruction quality and has high clinical application potential.
Aim: We introduce a nonconvex L1–L2 norm into the variational model and employ the ADMM to decompose the optimization problem into efficiently solvable subproblems. A preconditioned conjugate gradient (PCG) method is further integrated to accelerate the solution of linear systems, thereby improving both reconstruction accuracy and computational efficiency.
Approach: We propose an L-ADMM framework with adaptive weighted L1–L2 regularization for PAT reconstruction. The method employs ADMM to split the optimization into tractable subproblems and uses PCG to efficiently solve the linear systems. It achieves stable, high-quality reconstruction under sparse sampling by enhancing sparsity while preserving structural details.
Results: Experiments on vascular and breast models demonstrate that, even with only 64 transducers under sparse sampling, the proposed L-ADMM method achieves peak signal-to-noise ratio values of 37.24 and 36.26 dB and structural similarity index measure values of 0.9766 and 0.9665, respectively. Compared with L2, L1 + L2, L1–L2, TV regularization, and U-Net methods, the proposed algorithm substantially improves image quality, highlighting its feasibility for cost-effective clinical PAT.
Conclusions: The proposed L-ADMM-based reconstruction algorithm, by integrating adaptive regularization with efficient optimization, significantly improves PAT image quality under sparse sampling conditions, offering a feasible solution with strong potential for clinical translation.
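The ADMM splitting strategy can be illustrated on a simplified convex analogue, plain ℓ1-regularized least squares, with a direct solve standing in for PCG (a sketch of the splitting idea only, not the authors' nonconvex L1–L2 algorithm):

```python
import numpy as np

def admm_l1_ls(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1.

    A convex stand-in for the paper's nonconvex L1-L2 scheme: the x-update is
    a linear solve (PCG in the paper), the z-update is a soft threshold."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))            # x-subproblem
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u += x - z                                               # dual update
    return z
```

The nonconvex L1–L2 penalty changes only the z-subproblem (its proximal operator), which is why the same splitting structure carries over.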
- New
- Research Article
- 10.1523/eneuro.0203-25.2025
- Dec 1, 2025
- eNeuro
- Cristina E María-Ríos + 2 more
The "sign-tracking" and "goal-tracking" model of individual variation in associative learning permits the identification of rats with different cue-reactivity and predisposition to addiction-like behaviors. Compared to "goal-trackers" (GTs), "sign-trackers" (STs) show more susceptibility traits, such as increased cue-induced 'relapse' to drugs of abuse. Different cue- and reward-evoked patterns of activity in the nucleus accumbens (NAc) have been a hallmark of the ST/GT phenotype. However, it is unknown whether differences in the intrinsic neuronal properties of NAc medium spiny neurons (MSNs) in the core and shell subregions are also a physiological correlate of these phenotypes. We performed whole-cell slice electrophysiology in outbred male rats and found that STs exhibited the lowest excitability in the NAc core, with a lower number of action potentials and lower firing frequency, as well as a blunted voltage/current relationship in response to hyperpolarized potentials in both the NAc core and shell. Although firing properties of shell MSNs did not differ between STs and GTs, intermediate responders (IRs) that engage in both behaviors showed greater excitability than both STs and GTs. These findings suggest that intrinsic excitability in the NAc may contribute to individual differences in the attribution of incentive salience.
Significance Statement: During associative learning, cues acquire predictive value, but in some instances they also acquire incentive salience, meaning they take on some of the motivational properties of the reward. The propensity to attribute cues with incentive salience varies between individuals, and excessive attribution can lead to maladaptive behaviors. The "sign- and goal-tracking" model allows us to isolate these two properties and disambiguate the neurobiological processes that govern them. To our knowledge, this is the first study characterizing passive and active membrane properties of MSNs in the NAc core and shell of STs and GTs, as well as IRs. These findings are meant to better inform investigations of the distinct role of the NAc in reward learning, particularly in the attribution of incentive salience and addiction predisposition.
- New
- Research Article
- 10.1016/j.indic.2025.101034
- Dec 1, 2025
- Environmental and Sustainability Indicators
- Meng Cao + 1 more
Modified InVEST model for spatiotemporal variations of water conservation and its driving factors in the southwestern alpine Canyon region of China from 1990 to 2020
- New
- Research Article
- 10.1145/3763331
- Dec 1, 2025
- ACM Transactions on Graphics
- Pengfei Wang + 13 more
Neural implicit shape representation has drawn significant attention in recent years due to its smoothness, differentiability, and topological flexibility. However, directly modeling the shape of a neural implicit surface, especially as the zero-level set of a neural signed distance function (SDF), with sparse geometric control is still a challenging task. Sparse input shape control typically includes 3D curve networks or, more generally, 3D curve sketches, which are unstructured, cannot be connected to form a curve network, and are therefore more difficult to handle. While 3D curve networks and curve sketches provide intuitive shape control, their sparsity and varied topology pose challenges in generating high-quality surfaces that meet such curve constraints. In this paper, we propose NeuVAS, a variational approach to shape modeling using neural implicit surfaces constrained by sparse input shape control, including unstructured 3D curve sketches as well as connected 3D curve networks. Specifically, we introduce a smoothness term based on a functional of surface curvatures to minimize shape variation of the zero-level set surface of a neural SDF. We also develop a new technique to faithfully model G⁰ sharp feature curves as specified in the input curve sketches. Comprehensive comparisons with state-of-the-art methods demonstrate the significant advantages of our method.
- New
- Research Article
- 10.1016/j.srs.2025.100264
- Dec 1, 2025
- Science of Remote Sensing
- Minh Tri Le + 5 more
High spatial resolution crop type and land use land cover classification without labels: A framework using multi-temporal PlanetScope images and variational Bayesian Gaussian mixture model
- New
- Research Article
- 10.1371/journal.pone.0336022.r006
- Dec 1, 2025
- PLOS One
- Solomon Sisay Mulugeta + 3 more
Background: Malaria is a life-threatening infectious disease caused by parasites of the genus Plasmodium, transmitted through the bite of infected female Anopheles mosquitoes, which act as vectors of the disease. It affects approximately 219 million people globally and results in 435,000 deaths each year. Fever, chills, and exhaustion are among the signs of the illness; if left untreated, these symptoms can develop into serious complications such as anemia, respiratory distress, and even organ failure. By identifying determinants of malaria prevalence, this study supports evidence-based national malaria prevention and control initiatives. The results help improve decision-making for malaria control efforts and guide focused public health initiatives by identifying areas with a high malaria burden.
Methods: Data from the 2021 Niger Malaria Indicator Survey (NMIS) are used, focusing on RDT-confirmed malaria cases in children aged 6-59 months. The dataset includes individual, household, and community-level variables, such as age, household income, education, healthcare access, and geographic coordinates. The spatial distribution of malaria prevalence is first visualized through maps and hot spot analysis to identify areas with high and low malaria rates. Multilevel logistic regression models are applied to account for the hierarchical structure of the data, with random effects incorporated to capture unobserved heterogeneity between regions and communities, allowing more accurate estimates of malaria prevalence by adjusting for spatial clustering. Model fit is evaluated using standard criteria (AIC, BIC, and DIC), and diagnostics are performed to ensure reliability.
Results: Of the 4,724 children aged 6-59 months who were examined, 1,121 (23.7%) had positive RDT results for malaria. Malaria prevalence in Niger among children aged 6-59 months is significantly clustered (Moran's I = 0.434, p < 0.001), revealing distinct hotspots and cold spots unlikely to be due to chance. Model III provides a better fit for RDT-confirmed prevalence, as indicated by the smallest AIC, BIC, and deviance statistics compared to the reduced models. Malaria prevalence was associated with several factors, including child age, anemia level, maternal education, the number of children sleeping under bed nets, the use of insecticide-treated nets, the number of children aged 5 and under, and residence and region.
Conclusion: The findings show that malaria prevalence among children aged 6-59 months in Niger is significantly influenced by factors such as child age, anemia level, maternal education, and bed net usage, emphasizing the need for improved coverage of insecticide-treated nets and tailored interventions based on local conditions.
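Global Moran's I, the clustering statistic reported in the results, can be computed as follows (a minimal sketch; the actual analysis would use survey-weighted prevalence rates and a geographic neighbour matrix):

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.

    values:  (n,) observed rates per area.
    weights: (n, n) spatial weight matrix with zero diagonal
             (e.g. 1 for neighbouring areas, 0 otherwise).
    I > 0 indicates clustering of similar values; I < 0 indicates dispersion."""
    x = np.asarray(values, float)
    W = np.asarray(weights, float)
    n = x.size
    z = x - x.mean()                       # deviations from the mean
    num = n * np.sum(W * np.outer(z, z))   # spatially weighted cross-products
    den = W.sum() * np.sum(z ** 2)
    return num / den
```

Significance is then typically assessed against a permutation null distribution, shuffling the values over the areas.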
- New
- Research Article
- 10.1111/iji.70029
- Nov 29, 2025
- International journal of immunogenetics
- Jerzy K Kulski
The human major histocompatibility complex (MHC) is characterized by extreme polymorphism, with HLA-C contributing to pathogen defence, disease susceptibility, and transplantation outcomes. Beyond allelic diversity and variation, the evolutionary restructuring of haplotypes influences functional diversity across the region. This study analysed HLA-C haplotypes in the context of transposable element (TE) architecture and single-nucleotide polymorphism (SNP) patterns to identify conserved modules and ancestral recombination boundaries. Paired genomic alignments of fully phased homozygous lymphoblastoid cell lines carrying 36 haplotypes of the HLA-C*01-C*07, C*12, and C*16 allelic groups were performed using Mauve to define locally co-linear blocks. SNP density plots were generated to visualize transitions between SNP-rich and SNP-poor regions. Six diverse HLA-C*07 haplotypes (linked to HLA-B*07, *08, *18, *49, *57 and *58) were examined as a primary case study. Particular focus was placed on crossover zones where SNP transitions coincided with TE boundaries, indicating putative ancestral recombination breakpoints. Comparative analyses revealed extensive structural variation among C*07 haplotypes and across the broader C*01-C*16 series. The C*07:02 homologs exhibited significantly higher SNP density (mean = 1.87 ± 0.44 SNPs/kb, n = 10) than the C*07:01 and C*07:18 homologs (mean = 0.29 ± 0.21 SNPs/kb, n = 20; p < 0.001). Abrupt SNP transitions frequently aligned with SINE, LINE, and LTR elements (e.g., Alu, L1, L2, HERV), marking recurrent TE-associated junctions. These breakpoints defined shared homozygous HLA-C segments spanning ∼4 kb to ∼4 Mb, consistent with mosaic haplotype evolution through recombination of conserved modules. HLA-C haplotypes thus exhibit modular mosaic structures shaped by recurrent recombination at TE-associated crossover zones. MHC haplotypes may share the same HLA-C allele yet differ in the surrounding HLA-B and class I genomic organization, preserving or disrupting co-adapted functional units. Incorporating haplotypic mosaicism, rather than focusing solely on allelic polymorphism, may improve models of immune variation, disease risk, and transplantation matching.
- New
- Research Article
- 10.3390/app152312560
- Nov 27, 2025
- Applied Sciences
- Runkan Liu + 2 more
To address timing uncertainty in low-voltage circuits, this paper proposes an analytical sequential path delay model based on the lognormal distribution. Unlike previous works that primarily focus on combinational logic, our model provides a complete framework for sequential elements, decoupling inter-stage correlations within flip-flops through a linear delay transformation. The model's key innovation is a one-shot characterization approach that dramatically reduces simulation time compared to traditional Monte Carlo methods. Experiments on the TSMC 28 nm process show high accuracy, with average errors below 6% relative to Monte Carlo. Our model demonstrates prediction accuracy improvements of up to 10.2× over prior art, establishing an efficient and accurate solution for variation-aware sequential timing analysis.
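The core idea of an analytical lognormal delay model can be illustrated with classical moment matching: approximating a sum of independent lognormal stage delays by a single lognormal (Wilkinson's method; a generic sketch, not the paper's correlation-aware formulation):

```python
import numpy as np

def wilkinson_sum(mus, sigmas):
    """Match a sum of independent lognormal stage delays LN(mu_i, sigma_i)
    to a single lognormal LN(mu_eq, sigma_eq) via its first two moments.

    Illustrative textbook method only; the paper additionally handles
    inter-stage correlation through a linear delay transformation."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    m1 = np.sum(np.exp(mus + sigmas ** 2 / 2))                 # E[sum]
    var = np.sum(np.exp(2 * mus + sigmas ** 2) * (np.exp(sigmas ** 2) - 1))
    m2 = var + m1 ** 2                                         # E[sum^2]
    sigma_eq = np.sqrt(np.log(m2 / m1 ** 2))
    mu_eq = np.log(m1) - sigma_eq ** 2 / 2
    return mu_eq, sigma_eq
```

Because the matched parameters come from closed-form moments, no Monte Carlo sampling is needed to characterize the path delay distribution.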
- New
- Research Article
- 10.1080/01431161.2025.2577973
- Nov 25, 2025
- International Journal of Remote Sensing
- Yongxin Li + 3 more
Panchromatic sharpening is a data fusion technique that combines a high-spatial-resolution panchromatic (PAN) image with lower-resolution multispectral imagery to produce high-resolution multispectral (HRMS) imagery. This paper proposes a novel panchromatic sharpening method that mitigates texture loss in the fused image by incorporating fractional derivatives. The proposed model integrates three key components: a fractional derivative-guided regularization term, a PAN constraint term, and a traditional spectral fidelity term. The existence and uniqueness of the model's minimum are rigorously proven within the corresponding functional space. An iterative method based on the discrete Fourier transform is employed to solve the model in the frequency domain. Experimental results on five real-world datasets validate the effectiveness of the proposed method, with a variety of spatial and spectral metrics used to quantitatively assess the quality of the sharpening results. The findings indicate that the proposed method significantly enhances sharpening performance and improves both spatial and spectral image quality compared to other widely used panchromatic sharpening techniques.
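The frequency-domain solution strategy can be illustrated on the simplest quadratic variational model, min_u ||u − f||² + λ||∇u||², which diagonalizes under the DFT (a sketch with an integer-order gradient standing in for the paper's fractional derivative, and no PAN constraint):

```python
import numpy as np

def smooth_fft(f, lam=1.0):
    """Closed-form Fourier-domain minimizer of ||u - f||^2 + lam*||grad u||^2
    with periodic boundaries: u_hat(k) = f_hat(k) / (1 + lam * L(k)),
    where L(k) is the symbol of the discrete Laplacian.

    A fractional-derivative term would only change L(k) to |k|^(2*alpha)."""
    H, W = f.shape
    ky = 2 * np.pi * np.fft.fftfreq(H)
    kx = 2 * np.pi * np.fft.fftfreq(W)
    # Symbol of the 5-point discrete Laplacian: 4*sin^2(k/2) per axis.
    lap = 4 * np.sin(ky[:, None] / 2) ** 2 + 4 * np.sin(kx[None, :] / 2) ** 2
    u_hat = np.fft.fft2(f) / (1 + lam * lap)
    return np.real(np.fft.ifft2(u_hat))
```

Each iteration of a splitting scheme for the full model reduces to one such pointwise division in the frequency domain, which is why the DFT-based solver is fast.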
- New
- Research Article
- 10.3390/s25237198
- Nov 25, 2025
- Sensors
- Yuxue Feng + 5 more
Images captured by vision sensors in outdoor environments often suffer from haze-induced degradations, including blurred details, faded colors, and reduced visibility, which severely impair the performance of sensing and perception systems. To address this issue, we propose a haze-removal algorithm based on multiple variational constraints. Building on the classic atmospheric scattering model, a mixed variational framework is presented that incorporates three regularization terms for the transmission map and scene radiance. Concretely, an ℓp norm and an ℓ2 norm are jointly imposed on the transmission map to smooth details while preserving structures, and a weighted ℓ1 norm constrains the scene radiance to suppress noise. The devised weight function accounts for both the local variances and the gradients of the scene radiance, adaptively perceiving textures and structures and controlling the degree of smoothing during restoration. To solve the mixed variational model, a re-weighted least squares strategy iteratively solves two separated subproblems. Finally, a gamma correction adjusts the overall brightness, yielding the final recovered result. Extensive comparisons with state-of-the-art methods demonstrate that the proposed algorithm produces visually satisfying results with superior clarity and vibrant colors. In addition, it generalizes well to diverse degradation scenarios, including low-light and remote-sensing hazy images, and effectively improves the performance of high-level vision tasks.
- New
- Research Article
- 10.1038/s41598-025-25291-y
- Nov 21, 2025
- Scientific Reports
- Muzaffar Bashir Arain + 6 more
This paper presents adaptive regularization control for a hybrid variational model and its application to color image denoising, combining total variation (TV) and L2 regularizers with a normalized data fidelity term. The adaptive control works locally and is performed by a control parameter that selects the appropriate diffusion operator for smoothing (quadratic) and edge preservation (nonquadratic). In addition to the combined diffusion operators, the data term is normalized to ensure a good balance between diffusion and fidelity. This complementary data normalization enhances performance under high noise levels and mitigates artifacts. The resulting optimization framework leads to a time-dependent partial differential equation that is discretized using standard finite differences. Simulation experiments on benchmark datasets show that the proposed method consistently outperforms conventional denoising techniques in terms of edge preservation, noise reduction, and computational efficiency. Quantitative evaluation uses the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), root mean square error (RMSE), and convergence time (CT). A comparative analysis with state-of-the-art variational denoising models further highlights the strong performance of the proposed approach in preserving sharp structural details while achieving effective noise suppression.
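One explicit time step of such a diffusion–fidelity PDE can be sketched as follows (a minimal finite-difference illustration; the edge-stopping function g plays the role of the local control parameter that switches between quadratic and edge-preserving smoothing, and all constants are arbitrary, not the paper's):

```python
import numpy as np

def diffusion_step(u, f, dt=0.1, k=0.1, lam=0.5):
    """One explicit step of du/dt = div(g(|grad u|^2) grad u) - lam*(u - f).

    g ~ 1 in flat regions (near-quadratic smoothing) and g ~ 0 at strong
    edges (edge preservation); lam weights the data-fidelity pull toward
    the noisy observation f."""
    gy, gx = np.gradient(u)
    g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / k ** 2)   # edge-stopping control
    div = np.gradient(g * gy, axis=0) + np.gradient(g * gx, axis=1)
    return u + dt * (div - lam * (u - f))
```

Iterating this step from u = f until the change falls below a tolerance gives a basic denoiser; the paper's scheme additionally normalizes the fidelity term against the diffusion term.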
- New
- Research Article
- 10.1093/mnras/staf2071
- Nov 21, 2025
- Monthly Notices of the Royal Astronomical Society
- Satvik Mishra + 2 more
The distribution of 21 cm emission from neutral hydrogen is a powerful cosmological and astrophysical probe, as it traces the underlying dark matter and cold gas distributions throughout cosmic time. However, the prediction of observable signals is hindered by the large computational cost of the required hydrodynamic simulations. We introduce a novel machine learning pipeline that, once trained on a hydrodynamical simulation, is able to generate both halo mass density maps and the three-dimensional 21 cm brightness temperature signal, starting from a dark matter-only simulation. We use an attention-based ResUNet (HALOgen) to predict dark matter halo maps, which are then processed through a trained conditional variational diffusion model (LODI) to produce 21 cm brightness temperature maps. LODI is trained on smaller sub-volumes that are then seamlessly combined into a 512-times-larger volume using a new method, called 'latent overlap'. We demonstrate that, once trained on 25³ (Mpc/h)³ volume simulations, we are able to predict the 21 cm power spectrum on an unseen dark matter map (with the same cosmology) to within 10% for wavenumbers k ≤ 10 h Mpc⁻¹, deep inside the non-linear regime, with a computational effort of the order of two minutes. While demonstrated on this specific volume, our approach is designed to be scalable to arbitrarily large simulations.
- New
- Research Article
- 10.1051/0004-6361/202555376
- Nov 18, 2025
- Astronomy & Astrophysics
- Theosamuele Signor + 5 more
Chemical abundance determinations from stellar spectra are challenged by observational noise, limitations in stellar models, and departures from simplifying assumptions. While traditional and supervised machine learning methods have made remarkable progress in estimating atmospheric parameters and chemical compositions within existing physical models, these factors still constrain our ability to fully exploit the vast datasets provided by modern spectroscopic surveys. We aim to develop a self-supervised, disentangled representation learning framework that extracts chemically meaningful features directly from spectra, without relying on externally imposed label catalogs. We built a variational autoencoder-based representation learning model with a physics-inspired structure comprising multiple decoders, each of which focuses on spectral regions dominated by a particular element, enforcing that each latent dimension maps to a single abundance. To evaluate the potential application of our framework, we trained and validated the model on low-resolution, low signal-to-noise synthetic spectra, focusing on [Fe/H], [C/Fe], and [α/Fe]. We then demonstrate how the trained model can be used to flag stars as chemically enhanced or depleted in these abundances based on their position within the latent distribution. Our model successfully learns a representation of spectra whose axes correlate tightly with the target abundances [Fe/H], [C/Fe], and [α/Fe]. The disentangled representations provide a robust means to distinguish stars based on their chemical properties, offering an efficient and scalable solution for large spectroscopic surveys.
- Research Article
- 10.3390/rs17223696
- Nov 12, 2025
- Remote Sensing
- Yuanhao Cheng + 4 more
Many multi-target tracking applications (e.g., tracking multiple targets with LiDAR or millimeter-wave radar) are challenged by closely spaced targets. In this work, we propose a method for the tracking of multiple extended targets or unresolvable group targets in such scenarios. The approach builds on the cardinality probability hypothesis density (CPHD) filtering framework for computational efficiency, models the target’s extent with the multiplicative error model (MEM), and uses variational Gaussian mixture model (VGMM)-derived responsibilities to drive probabilistic data association (PDA) measurement updates. This effectively mitigates state fusion between closely spaced targets and yields more accurate state estimation. In experiments on diverse simulated and real datasets, the proposed method consistently outperforms existing approaches, achieving the lowest localization, shape estimation, and cardinality estimation errors while maintaining an acceptable runtime and scalability.
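The mixture responsibilities that drive the probabilistic data association update can be illustrated with a plain Gaussian-mixture posterior (an isotropic-covariance sketch; the paper's variational treatment also infers the mixture parameters themselves rather than taking them as given):

```python
import numpy as np

def responsibilities(points, means, variances, weights):
    """Posterior responsibility r[n, k] of mixture component k for
    measurement n -- the quantity used to weight each measurement's
    contribution in a PDA-style update.

    points: (N, D) measurements; means: (K, D); variances, weights: (K,)."""
    points = np.atleast_2d(points)
    d = points.shape[1]
    diff = points[:, None, :] - means[None, :, :]          # (N, K, D)
    sq = np.sum(diff ** 2, axis=-1)                        # squared distances
    log_p = (np.log(weights)[None, :]
             - 0.5 * sq / variances[None, :]
             - 0.5 * d * np.log(2 * np.pi * variances)[None, :])
    log_p -= log_p.max(axis=1, keepdims=True)              # numerical stabilization
    r = np.exp(log_p)
    return r / r.sum(axis=1, keepdims=True)                # rows sum to 1
```

For two closely spaced targets, these soft assignments split ambiguous measurements between components instead of hard-associating them, which is what mitigates state fusion.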
- Research Article
- 10.1007/s10198-025-01870-8
- Nov 12, 2025
- The European journal of health economics : HEPAC : health economics in prevention and care
- Karen V Macdonald + 6 more
Next generation sequencing (NGS) can shorten the diagnostic odyssey for patients with rare diseases. However, valuing the combination of health and non-health outcomes associated with NGS is challenging. While stated preference methods can be used for monetary valuation of outcomes, frameworks that jointly account for both costs and benefits to determine cost-acceptability are limited. Insights into cost-acceptability can help inform pricing and access decisions where competition among NGS alternatives is imperfect or diagnostics are provided as a public good. We used stated preference data to estimate the cost-acceptability of exome sequencing (ES) (i.e., the cost at which ES provides value and users are willing to pay) in a user-based valuation. We estimated the benefit of ES as an alternative to all other diagnostic tests using a compensating variation model. Based on estimated net-benefit, we determined the proportion of users with positive expected net-benefit for varying cost (CAD$0-$15,000) and chance of diagnosis (10%-90%). We created a cost-acceptability frontier of costs and chance of diagnosis for a range of scenarios. Expected net-benefit and cost-acceptability were estimated for low-cost (CAD$1,600) and high-cost (CAD$11,660) ES scenarios. We find that at least half of users consider costs of up to CAD$10,000 acceptable if the chance of obtaining a diagnosis from ES is at least 50%. However, at least some users are willing to accept a chance of diagnosis below 50%, even if the associated costs are high. Our proposed valuation framework suggests that many potential users of ES are willing to accept various combinations of cost and chance of diagnosis. Cost-acceptability is especially high if the chance of diagnosis is larger than 50%.
- Research Article
- 10.1371/journal.pone.0334756
- Nov 11, 2025
- PLOS One
- Zhengyuan Zhang + 3 more
Multi-visual pattern mining plays an important role in image classification, retrieval, and other fields. A multi-visual pattern mining algorithm based on a variational inference Gaussian mixture model and a pattern activation response graph is introduced to address the insufficient frequency and discriminability of traditional algorithms. The innovation lies in combining the two components: the variational inference Gaussian mixture model removes the need to manually preset the number of patterns by automatically determining the optimal number of mixture components, ensuring frequency, while the pattern activation response graph improves discriminability by capturing key areas of the image, allowing the method to balance both objectives and distinguish multiple patterns within the same category. In quantitative analysis, the algorithm achieved a high frequency of 92.81% at a similarity threshold of 0.866 on the CIFAR-10 (Canadian Institute for Advanced Research-10) dataset. On the Travel dataset, classification accuracy and F1 score reached 95.36% and 94.17%, respectively, significantly higher than competing algorithms. The proposed algorithm thus offers high frequency and discriminability, providing a more comprehensive visual representation, helping to mine images of the same category with different visual patterns, and offering technical support for image classification and retrieval.
- Research Article
- 10.1175/jcli-d-24-0624.1
- Nov 11, 2025
- Journal of Climate
- Adam Michael Bauer + 2 more
Abstract Heat waves are expected to increase in severity and frequency under climate change. Case studies have shown that heat waves typically occur during a coalescence of anomalous atmospheric and land surface conditions, but teasing apart the different contributing factors is a challenge, in part owing to difficulty in disentangling the role of soil moisture from that of atmospheric variations in solar radiation and thermal advection. Here, we provide evidence that low soil moisture is associated with extremely high temperatures in the midlatitudes and develop a theoretical framework to understand this association. We first show that a nonlinear relationship between soil moisture and temperature arises from energy and mass conservation at the land surface, then employ this relationship to quantify the influence of soil moisture on temperature variability. After deriving a diagnostic equation for the nonlinear temperature response to soil moisture variations, we obtain a dynamical Hasselmann-like model for the soil moisture variations themselves. We find that soil moisture fluctuations control the frequency of temperature extremes by slowly altering the land surface climate state on which atmospheric variability is superimposed, rather than by altering atmospheric variability itself. Our diagnostic model allows us to quantify how extreme an atmospheric anomaly needs to be to create a heat wave, conditional on the underlying soil moisture. By forcing our Hasselmann-like model for soil moisture with stochastic precipitation, we derive analytical solutions for the statistical moments of soil moisture.
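The Hasselmann-like model described above, slow soil-moisture memory integrating fast stochastic precipitation forcing, is in its simplest form an Ornstein–Uhlenbeck process (an illustrative simulation; the damping time, noise level, and units are arbitrary choices, not the paper's fitted values):

```python
import numpy as np

def simulate_soil_moisture(n_steps=10000, tau=10.0, sigma=0.3, dt=1.0, seed=0):
    """Euler-Maruyama integration of dm/dt = -m/tau + sigma * xi(t):
    a Hasselmann-like soil-moisture anomaly damped on timescale tau and
    forced by white-noise 'precipitation' xi."""
    rng = np.random.default_rng(seed)
    m = np.zeros(n_steps)
    for t in range(1, n_steps):
        m[t] = m[t - 1] - (m[t - 1] / tau) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return m
```

The stationary variance of this process is sigma² · tau / 2, so slower damping (larger tau) yields larger, longer-lived soil-moisture anomalies on which atmospheric variability is superimposed.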
- Research Article
- 10.3390/electronics14224377
- Nov 9, 2025
- Electronics
- Elfatih A A Elsheikh
In this paper, a novel model of dust-storm intensity variations and their effect on earth–satellite link design is developed for total attenuation prediction. The proposed model expresses the total dust-induced attenuation as a function of the empirically derived specific attenuation (in dB/km) and an effective slant path distance. The formulation incorporates the vertical variation in dust storm intensity along the propagation path to more accurately represent the attenuation experienced by slant links. The effective slant path distance is obtained as a combination of the total slant path distance and an adjustment factor, which is developed based on a visibility-height model of the dust storm structure. The proposed model has been validated against one year of measured attenuation on 6.2 km and 7.6 km microwave links operating at 21.2 GHz and 14.5 GHz, respectively.