- Research Article
- 10.13052/jrss0974-8024.1828
- Nov 25, 2025
- Journal of Reliability and Statistical Studies
- Yunish Khan + 1 more
The emergence of soybean diseases represents a formidable obstacle to the sustainable progression of the rapidly evolving soybean industry, which endeavors to achieve elevated productivity and enhanced crop excellence. Prompt and accurate disease prognostication is imperative for efficacious management protocols, as it contributes to limiting pathogen proliferation. This investigation examined the impact of various meteorological factors on the incidence of Soybean Yellow Mosaic Virus (SYMV) in Pantnagar, Uttarakhand. Six multivariate frameworks were assessed, encompassing Stepwise Multiple Linear Regression (SMLR), Artificial Neural Network (ANN), and Elastic Net (ELNET), employing both original climatic parameters and calculated weather indices to predict disease severity. For model construction, 80% of the dataset was allocated for training purposes while the remaining 20% was reserved for validation procedures. Among all examined frameworks, the ANN model incorporating weather indices (ANN-WI) exhibited exceptional predictive performance, attaining a normalized Root Mean Square Error (nRMSE) of only 3.08% and an R² coefficient of 0.99 during the calibration phase. Based on performance metrics, the frameworks were ranked as follows: ANN-WI ≈ ANN-W > ELNET-W > ELNET-WI > SMLR-WI > SMLR-W. The results demonstrate that ANN-based frameworks, especially those incorporating weather indices, significantly outperformed the alternative modeling techniques within the investigated area.
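The two calibration metrics quoted above, nRMSE and R², are straightforward to compute. A minimal sketch follows; the severity values are invented stand-ins, not the study's data.

```python
import math

def nrmse_percent(observed, predicted):
    """Root mean square error normalized by the observed mean, in percent."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical disease-severity observations and model predictions
obs = [12.0, 18.5, 25.0, 31.2, 40.8]
pred = [12.4, 18.0, 25.9, 30.5, 41.3]
nrmse = nrmse_percent(obs, pred)   # ~2.46% for this toy data
r2 = r_squared(obs, pred)          # ~0.996
```

A low nRMSE together with an R² near 1, as reported for ANN-WI, indicates predictions that track both the scale and the variability of the observed severities.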
- Research Article
- 10.13052/jrss0974-8024.1827
- Nov 13, 2025
- Journal of Reliability and Statistical Studies
- Manisha Verma + 2 more
Road accidents are among the leading causes of injury and death globally; this study aims to develop a set of rules that Indian road traffic and safety agencies can utilize to identify the potential factors driving accident severity. We used R software to build classification models: Multi-Layer Perceptron (MLP), Naive Bayes, decision tree, and logistic regression, to predict the severity of injuries. Analysing 2585 road accident records in India from 2013 to 2018, we find overall accuracies of 87.47% for logistic regression, 91.21% for the decision tree, and 86.05% for MLP in predicting injury severity. However, the Naive Bayes model demonstrated lower accuracy, at 75.57%, compared to the other algorithms. Finally, to identify the significant factors influencing accident severity, we further explored the rules generated by the decision tree algorithm and, based on the findings, highlighted rules and focus areas to reduce accident severity.
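The comparison above rests on overall accuracy. As a minimal sketch of the metric and the resulting model ranking (the labels and predictions below are invented; the study itself was carried out in R on the Indian accident records):

```python
def accuracy(actual, predicted):
    """Overall accuracy: share of records whose predicted severity matches the actual one."""
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical severity labels and classifier outputs (not the study's data)
actual = ["fatal", "serious", "minor", "minor", "serious", "fatal", "minor", "serious"]
models = {
    "decision_tree":       ["fatal", "serious", "minor", "minor", "serious", "fatal", "minor", "minor"],
    "logistic_regression": ["fatal", "serious", "minor", "serious", "serious", "fatal", "minor", "minor"],
    "naive_bayes":         ["fatal", "minor", "minor", "serious", "serious", "fatal", "serious", "minor"],
}

# Rank the models by accuracy, best first
ranking = sorted(models, key=lambda m: accuracy(actual, models[m]), reverse=True)
```

On this toy data the decision tree ranks first and Naive Bayes last, mirroring the ordering the study reports on its real records.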
- Research Article
- 10.13052/jrss0974-8024.1826
- Nov 13, 2025
- Journal of Reliability and Statistical Studies
- Madhvendra Pratap Singh + 1 more
This paper discusses the adoption of the meta-UTAUT model incorporating trust. The model consists of performance expectancy (PE), effort expectancy (EE), social influence (SI), facilitating conditions (FC), and trust as independent variables, attitude as a mediating variable, and behavioural intention (BI) and use behaviour (UB) as the dependent variables. A survey of 279 users drawn from urban and rural settings was conducted, and the suggested model was tested through structural equation modelling (SEM). The outcome of this study indicates that BI positively affects UB and is its strongest predictor. Importantly, trust also positively influences both BI and UB. FC and EE play vital roles in building the attitude of the user, while EE, PE, and even SI were found to be weak predictors of BI. Since BI emerged as the strongest predictor, it can contribute to predicting the behaviour of the user. The study focused in particular on the role of trust and user experience.
- Research Article
- 10.13052/jrss0974-8024.1825
- Oct 15, 2025
- Journal of Reliability and Statistical Studies
- Maryam Ibrahim Habadi + 3 more
Global warming is among the most pressing environmental challenges, mainly driven by human-induced greenhouse gas emissions. Accurate forecasting of global temperature anomalies is essential for understanding climate trends and planning effective interventions. This study utilizes historical temperature anomaly data from 1940 to 2023, aiming to compare the forecasting performance of several statistical and machine learning models: Seasonal Autoregressive Integrated Moving Average (SARIMA), Triple Exponential Smoothing, Temporal Convolutional Networks (TCN), Long Short-Term Memory (LSTM), and two hybrid models, SARIMA-LSTM and SARIMA-TCN. Forecast accuracy was evaluated using Mean Squared Error, Mean Absolute Error, and Root Mean Squared Error. The TCN model demonstrated superior forecasting performance, achieving the lowest error across all metrics, followed by the SARIMA-LSTM hybrid model. The results support the combination of statistical and deep learning models for improved climate forecasting and offer valuable insights into future temperature trends amid global warming.
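The hybrid models pair a statistical component with a data-driven correction fitted to its residuals. A toy sketch of that decomposition, using a least-squares trend as a stand-in for SARIMA and a naive last-residual carry-over as a stand-in for the LSTM/TCN correction (purely illustrative, not the paper's implementation):

```python
def hybrid_forecast(series, steps=1):
    """Hybrid principle: fit a simple statistical component (here an OLS
    trend), then correct its forecast with a model of the residuals (here
    a naive carry-over of the last residual)."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    intercept = y_mean - slope * t_mean
    residuals = [y - (intercept + slope * t) for t, y in enumerate(series)]
    # trend extrapolation plus the residual correction
    return intercept + slope * (n - 1 + steps) + residuals[-1]
```

In the real hybrids, the residual model is a trained LSTM or TCN that captures the nonlinear structure the statistical component misses.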
- Research Article
- 10.13052/jrss0974-8024.1824
- Sep 10, 2025
- Journal of Reliability and Statistical Studies
- Prayas Sharma + 3 more
In this article, we propose a new family of estimators for estimating the unknown population median of a study variable by utilizing auxiliary information under simple random sampling. The choice of the median, as opposed to the mean, is particularly advantageous in the presence of outliers or skewed distributions, where the mean may be unduly influenced. We derive the expressions for the bias and mean square error (MSE) of the proposed class of estimators up to the first order of approximation. Furthermore, we examine several notable subclasses within the proposed family and calculate their respective MSEs. To assess the efficiency and robustness of the proposed estimators, an empirical study is conducted using real-world data and benchmarked against existing estimators from the literature. The results of this empirical analysis demonstrate that the proposed estimators achieve lower MSEs, underscoring their practical relevance and effectiveness in survey sampling applications.
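Ratio-type estimators of this kind exploit the known population median of the auxiliary variable. A sketch of the classical ratio-of-medians member of such a family (the textbook form, not necessarily the authors' proposed class; the data are hypothetical):

```python
import statistics

def ratio_median_estimator(y_sample, x_sample, x_pop_median):
    """Ratio-type median estimator: the sample median of the study variable y,
    scaled by the ratio of the known population median of the auxiliary
    variable x to its sample median."""
    return statistics.median(y_sample) * x_pop_median / statistics.median(x_sample)

# Small illustrative samples (hypothetical study and auxiliary values)
y = [14.0, 9.0, 22.0, 11.0, 30.0]
x = [7.0, 4.0, 12.0, 5.0, 16.0]
est = ratio_median_estimator(y, x, x_pop_median=6.0)  # median(y)=14, median(x)=7
```

When y and x are strongly correlated, the scaling corrects the sample median of y toward its population value, which is the mechanism behind the MSE gains reported above.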
- Research Article
- 10.13052/jrss0974-8024.1823
- Aug 9, 2025
- Journal of Reliability and Statistical Studies
- Deepika Gaur + 4 more
The available testing resources are usually restricted during the software testing process. In optimal control models it is usual to presume that these limitations are deterministic; this is not true in practice. For instance, a budget for the use of testing resources may be fixed at the start of the testing process, yet fault detection may be accelerated during the fault testing process to cope with unexpectedly high fault counts, which calls for additional funding for testing resources. These increased requirements are inherently uncertain. In this case, fuzzy set theory offers a plausible model for this degree of uncertainty, and hence the testing-resource expenditure constraint, i.e., the total budget, becomes imprecise in nature. By integrating fuzzy logic into the budgetary framework, this approach offers flexibility in decision-making, enabling more realistic and adaptable strategies. The aim of this research is to investigate an optimal way to allocate testing resources in order to minimize software costs during the testing and operation phases while taking independent and dependent faults into account. The model is formulated as an optimal control problem in which the budget constraint is expressed as a necessity and/or possibility type constraint. The proposed problem is solved for an optimal testing effort policy employing Pontryagin's Maximum Principle. A numerical example is presented to support the theoretical optimal control model. The values of the optimal testing effort expenditure function are displayed in both tabular and graphical forms.
- Research Article
- 10.13052/jrss0974-8024.1822
- Jul 31, 2025
- Journal of Reliability and Statistical Studies
- Muhammad Saleem + 1 more
The gamma distribution is well-known for its wide range of applications across various fields. Traditional gamma distributions and their associated algorithms have been used to model imprecise data; however, they face limitations in addressing uncertainty and indeterminacy. To overcome these challenges, this study introduces the neutrosophic gamma distribution, an extension of the classical gamma distribution that incorporates neutrosophic random variables to better handle imprecision and indeterminacy. Basic properties of the neutrosophic gamma distribution are presented, along with algorithms designed to generate data under different levels of indeterminacy. Simulation results reveal that as the degree of indeterminacy increases, the corresponding random variates tend to exhibit an upward trend. Comparative analysis with classical statistics highlights the significant effect of indeterminacy on data generation. Overall, the study demonstrates that the degree of indeterminacy plays a crucial role in shaping the behavior of data derived from the gamma distribution.
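A neutrosophic variate is commonly represented as an interval whose width grows with the degree of indeterminacy. The sketch below uses one simple assumed construction (classical gamma draw for the lower end, scale inflated by the indeterminacy degree I for the upper end); the paper's own generation algorithms may differ.

```python
import random

def neutrosophic_gamma(shape, scale, indeterminacy, rng):
    """One neutrosophic gamma variate as an interval [x_L, x_U]: the lower end
    is a classical gamma draw, the upper end inflates the scale by the
    indeterminacy degree I (an assumed illustrative construction)."""
    x = rng.gammavariate(shape, scale)
    return x, x * (1.0 + indeterminacy)

rng = random.Random(42)
samples = [neutrosophic_gamma(2.0, 3.0, 0.5, rng) for _ in range(1000)]
mean_lower = sum(lo for lo, _ in samples) / len(samples)   # ~ shape*scale = 6
mean_upper = sum(hi for _, hi in samples) / len(samples)   # inflated by (1+I)
```

This reproduces the qualitative finding quoted above: as the degree of indeterminacy increases, the generated variates trend upward relative to their classical counterparts.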
- Research Article
- 10.13052/jrss0974-8024.1821
- Jul 15, 2025
- Journal of Reliability and Statistical Studies
- Muhammad Azeem + 4 more
Survey statisticians employ randomized response techniques (RRT) to gather data on sensitive variables from respondents. Researchers periodically modify existing methods with the aim of achieving improvements over previous ones, whether in terms of privacy level, model efficiency, or both. In this paper, we introduce a quantitative randomized response technique which provides efficient estimates of the finite population mean. Moreover, the unified quantitative measure under the newly suggested technique is also observed to be smaller than that of the competitor models. A practical data collection example using the new technique is provided to illustrate its real-world application. Our findings suggest the new technique outperforms the competitor techniques in efficiency as well as in respondents' privacy level. Besides the empirical results, we have also conducted a simulation study to demonstrate the improved performance. The comparative analysis reveals that the proposed technique is appropriate for implementation in real-world sample surveys.
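For readers unfamiliar with quantitative RRT, the classical additive-scrambling baseline (not the authors' new technique) illustrates how an unbiased mean estimate survives the privacy protection: respondents report their true value plus noise from a known scrambling distribution, and the known noise mean is subtracted out.

```python
import random

def scrambled_response(true_value, rng, noise_sd=5.0):
    """The respondent reports Z = Y + S, where S comes from a known
    scrambling distribution (here zero-mean Gaussian), hiding Y."""
    return true_value + rng.gauss(0.0, noise_sd)

def estimate_population_mean(reports, scrambler_mean=0.0):
    """Since E[Z] = E[Y] + E[S], subtracting the known scrambler mean from
    the sample mean gives an unbiased estimate of E[Y]."""
    return sum(reports) / len(reports) - scrambler_mean

rng = random.Random(7)
true_values = [rng.uniform(30.0, 70.0) for _ in range(2000)]   # hypothetical sensitive variable
reports = [scrambled_response(y, rng) for y in true_values]
est = estimate_population_mean(reports)                        # close to the true mean
```

Efficiency and privacy trade off through the scrambler's variance, which is the dimension along which improved RRT models such as the one proposed here compete.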
- Research Article
- 10.13052/jrss0974-8024.18110
- Jun 17, 2025
- Journal of Reliability and Statistical Studies
- Pardeep Kumar + 2 more
The presented work explores the performance analysis of a Seabin, a device used for automated water cleaning, through a Markov decision process and mathematical modelling. With the rapid increase in waste disposal into water bodies, automated systems that maintain cleanliness without human intervention have become essential. Seabins, installed at various locations, rely on the proper working of critical components, including the catch bag, pump, outer shell, filter, and power source. This paper develops a state-based mathematical model to evaluate key parameters such as component-wise reliability and mean time to first failure (MTTF), and to perform sensitivity analysis of the system's MTTF. The sensitivity analysis reveals the effect of individual component failures on the overall performance of the system. Finally, on the basis of the obtained results, the authors provide valuable recommendations to enhance the efficiency and reliability of the Seabin.
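If the five components are assumed to fail independently at constant rates and the system stops when any of them fails (a series structure), reliability and MTTF follow from elementary formulas. The rates below are invented for illustration; the paper's state-based model may be more elaborate.

```python
import math

# Hypothetical constant failure rates (per hour) for the components named above
failure_rates = {
    "catch_bag": 0.0020,
    "pump": 0.0010,
    "outer_shell": 0.0005,
    "filter": 0.0015,
    "power_source": 0.0010,
}

def system_reliability(rates, t):
    """Series logic: the Seabin works only if every component works, so the
    exponential component reliabilities exp(-lambda_i * t) multiply."""
    return math.exp(-sum(rates.values()) * t)

def system_mttf(rates):
    """Mean time to first failure of an exponential series system: 1 / sum(lambda_i)."""
    return 1.0 / sum(rates.values())
```

Under these assumed rates the catch bag dominates the total failure rate, so improving it yields the largest MTTF gain, which is the kind of recommendation a sensitivity analysis makes precise.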
- Research Article
- 10.13052/jrss0974-8024.1819
- Jun 2, 2025
- Journal of Reliability and Statistical Studies
- Narinder Pushkarna + 1 more
In this paper, we introduce a new extension within the realm of statistical distributions, presenting the “length-biased Ishita distribution.” This distribution stands out as part of the esteemed category of weighted distributions, particularly the length-biased variation. Through meticulous analysis, we explore the mathematical and statistical properties of this novel distribution and reveal its distinct characteristics. Using the robust methodology of maximum likelihood estimation, we accurately estimate the model parameters, enhancing our understanding of its behavior. To demonstrate the practical utility and advantages of the length-biased Ishita distribution, we apply it to a real-world temporal dataset. This empirical analysis highlights its superior performance and adaptability, offering valuable insights into its potential applications across various domains.
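Length-biasing reweights a base density f(x) by the weight w(x) = x, yielding x·f(x)/E[X], so larger values are over-represented. A quick empirical illustration of the effect via size-biased resampling (generic to any positive distribution, not specific to the Ishita form):

```python
import random

def length_biased_resample(base_sample, k, rng):
    """Size-biased resampling: each observed value x is drawn with probability
    proportional to its magnitude, mimicking the weight w(x) = x that turns a
    base density f(x) into the length-biased density x*f(x)/E[X]."""
    return rng.choices(base_sample, weights=base_sample, k=k)

rng = random.Random(0)
base = [rng.expovariate(1.0) for _ in range(3000)]    # base draws, mean ~ 1
biased = length_biased_resample(base, 3000, rng)      # length-biased, mean ~ E[X^2]/E[X] = 2
mean_base = sum(base) / len(base)
mean_biased = sum(biased) / len(biased)
```

The upward shift of the mean is exactly why length-biased models fit data whose recording probability grows with the measured value, the setting the length-biased Ishita distribution is designed for.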