Adaptive fuzzy neighborhood decision tree

Decision tree algorithms are widely used in machine learning, and the central challenge lies in devising an optimal splitting strategy for node sample subspaces. For continuous data, conventional approaches typically either fuzzify the data or adopt a dichotomous scheme akin to the CART tree. However, fuzzifying continuous features often loses information, while the dichotomous approach can generate an excessive number of classification rules and is prone to overfitting. To address these limitations, this study introduces an adaptive growth decision tree framework, termed the fuzzy neighborhood decision tree (FNDT). First, we establish a fuzzy neighborhood decision model based on the concept of fuzzy inclusion degree. We then analyze the topological structure of misclassified samples under the proposed decision model, which provides a theoretical foundation for constructing FNDT. Next, we use conditional information entropy to screen the original features, selecting for each decision tree node the feature that offers the maximum information gain. By leveraging the conditional decision partitions derived from the fuzzy neighborhood decision model, we obtain an adaptive splitting method for the optimal features, yielding an adaptive growth decision tree algorithm that relies solely on the inherent structure of real-valued data. Experimental evaluations show that, compared with state-of-the-art decision tree algorithms, FNDT has a simpler tree structure, stronger generalization ability, and superior performance in classifying continuous data.
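
Since node splitting hinges on the feature with the maximum information gain, the sketch below illustrates that selection step with standard entropy-based scoring; the exhaustive threshold search and helper names are illustrative assumptions, not the authors' actual FNDT construction.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, threshold):
    """Gain from splitting a continuous feature at `threshold`."""
    left, right = labels[feature <= threshold], labels[feature > threshold]
    h_cond = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - h_cond

def best_split(X, y):
    """Pick (feature, threshold) with maximum information gain (hypothetical helper)."""
    best_j, best_t, best_g = None, None, -np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:      # candidate cut points
            g = information_gain(X[:, j], y, t)
            if g > best_g:
                best_j, best_t, best_g = j, t, g
    return best_j, best_t
```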

CoaddNet: Enhancing signal-to-noise ratio in single-shot images using convolutional neural networks with coadded image effect

Noise in astronomical images significantly impacts observations and analyses. Traditional denoising methods, such as increasing exposure time and image stacking, are of limited use for single-shot images or for rapidly changing astronomical objects. To address this, we developed CoaddNet, a novel deep-learning denoising model designed to improve the quality of single-shot images and enhance the detection of faint sources. To train and validate the model, we constructed a dataset containing high and low signal-to-noise ratio (SNR) images, comprising coadded and single-shot types. CoaddNet combines the efficiency of convolutional operations with the advantages of the Transformer architecture, enhancing spatial feature extraction through a multi-branch structure and reparameterization techniques. Performance evaluation shows that CoaddNet surpasses the baseline model, NAFNet, increasing the Peak Signal-to-Noise Ratio (PSNR) by 0.03 dB and the Structural Similarity Index (SSIM) by 0.005 while also improving throughput by 35.18%. The model raises the SNR of single-shot images by 22.8 on average, surpassing the noise reduction achieved by stacking 70-90 images. By boosting the SNR, CoaddNet substantially improves the detection of faint sources, enabling SExtractor to detect 22.88% more faint sources, and reduces the Mean Absolute Percentage Error (MAPE) of flux measurements for detected sources by at least 27.74%.
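
For reference, the PSNR figure quoted above follows the standard definition sketched below; this is generic evaluation code, not CoaddNet's own implementation, and the synthetic image pair is a placeholder.

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB (standard definition)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy usage on a synthetic image pair (placeholders, not survey data).
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```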

Joint Optimization of Empty Container Repositioning and Inventory Control Applying Dynamic Programming and Simulated Annealing

Based on the division of public and exclusive hinterlands, this paper studies the joint optimization of empty container repositioning and inventory control under non-stationary demand within a port cluster. A stochastic mixed-integer programming model is established using distributionally robust optimization, combined with quantity-based and periodic inventory control strategies. After a deterministic transformation, a hybrid algorithm of dynamic programming and simulated annealing is designed to solve the model, and the various costs under stationary and non-stationary scenarios are compared. The results show that jointly optimizing empty container inventory control and repositioning always reduces the total cost of empty container management for shipping companies, and that periodic inventory control strategies are better suited to empty container management under non-stationary demand. Sensitivity analysis shows a positive correlation between the uncertain demand parameters and shipping companies' total cost. Varying the accessibility parameters between terminals further demonstrates the superiority of the proposed empty container repositioning mode under sea-land coordination.
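
A generic skeleton of the simulated-annealing half of the hybrid algorithm is sketched below; the cost function, neighbour move, and cooling schedule are placeholders, not the paper's repositioning model.

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=100.0, cooling=0.95, steps=2000):
    """Generic SA loop: the cost and neighbour functions are problem-specific."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - current_cost
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

# Toy usage: minimise a 1-D quadratic (a stand-in for the repositioning cost).
best, _ = simulated_annealing(10.0, lambda x: (x - 3.0) ** 2,
                              lambda x: x + random.uniform(-1.0, 1.0))
```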

From One-dimensional to Multidimensional Map Neural Networks

Artificial neural networks have moved beyond their traditional role of function approximation and are now applied in fields such as image classification, machine translation, speech recognition, and natural language processing. On some datasets, however, traditional architectures exhibit low training and test accuracy, high loss, and long training times. This study introduces neural network architectures that outperform conventional models: a framework of one-dimensional and multidimensional map neural networks comprising three architectures, 1D-Map, 2D-Map, and 3D-Map. A systematic performance comparison with traditional models is conducted across four datasets, covering heat treatment of electroless Ni-P nano coatings, letter recognition, combined cycle power plants, and Seoul bike-sharing demand. On the heat treatment dataset, the proposed 3D-Map architecture achieved an average test accuracy 0.055 higher than the traditional MLP architecture. On the letter recognition dataset, the 3D-Map architecture's test accuracy was 0.0523 higher than that of the LSTM architecture. On the combined cycle power plant dataset, the 3D-Map architecture's test accuracy was 0.0612 higher than that of the MLP architecture. On the Seoul bike-sharing demand dataset, the 2D-Map architecture's test accuracy was 0.0696 higher than that of the LSTM architecture. These findings underscore the consistently superior performance of the proposed architectures compared with their traditional counterparts.

Multiscale Spatio-Temporal Feature Fusion Based Non-Intrusive Appliance Load Monitoring for Multiple Industrial Industries

Appliance types and power consumption patterns vary greatly across industries, which can make the identification results of traditional appliance load monitoring methods unstable. A non-intrusive appliance load monitoring (NIALM) method for multiple industries based on multiscale spatio-temporal feature fusion is proposed. First, a ConvNeXt block with efficient channel attention, which has strong feature extraction capability, extracts spatial features of appliance state changes and of the micro-variations generated during operation from mixed industrial load information. Meanwhile, a bidirectional gated recurrent network learns the bidirectional dependencies of the load data to obtain temporal features. A multi-scale feature extraction module then draws temporal and spatial features from network layers of different depths and fully integrates them. Finally, the model is optimized with Stochastic Weight Averaging: model weights collected at different points of training are averaged, which improves the model's generalization ability and identification accuracy. Experiments were conducted on six different industries, and evaluation metrics such as accuracy, F1 score, and Wasserstein distance verify the effectiveness and superiority of the method.
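
The Stochastic Weight Averaging step can be sketched with PyTorch's built-in utilities, as below; the toy model, data, and averaging schedule are assumptions standing in for the paper's NIALM network.

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Hypothetical stand-ins for the NIALM network and industrial load data.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(20)]
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

swa_model = AveragedModel(model)       # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.005)

for epoch in range(30):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if epoch >= 20:                    # start averaging late in training
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)           # refresh BatchNorm stats (no-op without BN layers)
```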

Multi-objective optimization and multi-attribute decision-making support for optimal operation of multi stakeholder integrated energy systems

To efficiently tackle the optimal operation problem of multi-stakeholder integrated energy systems (IESs), this paper develops a multi-objective optimization and multi-attribute decision-making support method. Mathematically, the optimal operation of IESs interconnected with distributed district heating and cooling units (DHCs) via the power grid and gas network can be formulated as a multi-objective optimization problem with economic, reliability, and environmental objectives and numerous constraints for each energy stakeholder. First, a multi-objective group search optimizer with a probabilistic operator and chaotic local search (MPGSO) is proposed to balance global and local optimality during the random search iterations. The MPGSO uses a crowding probabilistic operator to steer producers toward areas with higher potential but less crowding, reducing the number of fitness function evaluations, and adopts a new parameter selection strategy based on chaotic sequences with limited computational complexity to escape local optima. A set of superior Pareto-optimal fronts can thereby be obtained. Subsequently, a multi-attribute decision-making support method based on the interval evidential reasoning (IER) approach is used to select a final solution from the Pareto-optimal set, taking multiple attributes of each stakeholder into consideration. To verify the effectiveness of the MPGSO, it is tested on the DTLZ suite of benchmark problems against the original GSOMP, NSGA-II, and SPEA2. Simulation studies are also conducted on a modified IEEE 30-bus system connected with distributed DHCs and a 15-node gas network to verify the proposed approach. The quality of the obtained Pareto-optimal solutions is assessed using criteria including hypervolume (HV), generational distance (GD), and the Spacing index. Simulation results show that the number of Pareto-optimal solutions (NPS) found by MPGSO is higher by about 32.6%-62.1%, and the computation time (CT) lower by about 2.94%-46.1%, compared with the other algorithms. To further evaluate performance on larger-scale problems, the study also employs a modified IEEE 118-bus system. The proposed MPGSO effectively handles multi-objective, non-convex optimization problems, yielding Pareto sets with better convergence and distribution.
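
The chaotic-sequence idea can be illustrated with a logistic map, a common choice for chaotic local search, though the abstract does not specify which map the MPGSO uses; the step size and bounds below are assumptions.

```python
import numpy as np

def logistic_sequence(x0=0.7, mu=4.0, n=100):
    """Chaotic sequence in (0, 1) from the logistic map x <- mu * x * (1 - x)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_local_search(solution, lower, upper, step=0.1):
    """Perturb a candidate with chaotic offsets instead of uniform random ones."""
    seq = logistic_sequence(n=len(solution))
    return np.clip(solution + (seq - 0.5) * (upper - lower) * step, lower, upper)

candidate = np.array([0.4, 0.6, 0.8])
print(chaotic_local_search(candidate, lower=0.0, upper=1.0))
```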

Heterogeneous Graph Neural Network with Hierarchical Attention for Group-Aware Paper Recommendation in Scientific Social Networks

In recent years, the academic groups established in Scientific Social Networks (SSNs) have not only facilitated collaboration among researchers but also enriched the relations in SSNs, providing valuable information for paper recommendation tasks. However, existing paper recommendation methods rarely consider group information, and they fail to fully leverage it because of the heterogeneous and complex relations among researchers, papers, and groups. In this paper, a heterogeneous graph neural network with hierarchical attention, named HHA-GPR, is proposed for group-aware paper recommendation. First, a heterogeneous graph is constructed from the interactions of researchers, papers, and groups in SSNs. Second, a random walk-based sampling strategy samples highly correlated heterogeneous neighbors for researchers and papers. Third, a hierarchical attention network with intra-type and inter-type attention mechanisms aggregates the sampled neighbors and comprehensively models the complex relations among them: the intra-type attention mechanism aggregates neighbors of the same type, and the inter-type attention mechanism combines the embeddings of different types into the final node embedding. Extensive experiments on the real-world CiteULike and AMiner datasets demonstrate that the proposed method outperforms other benchmark methods, with average improvements of 5.3% in Precision, 5.6% in Recall, and 5.1% in Normalized Discounted Cumulative Gain (NDCG) across both datasets.
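
A toy sketch of the two-level aggregation follows: intra-type attention pools neighbours of one type, and inter-type attention combines the per-type summaries. The dot-product scoring and dimensions are simplifying assumptions, not HHA-GPR's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, vectors):
    """Weighted sum of `vectors`, scored against `query` by dot product."""
    return softmax(vectors @ query) @ vectors

def hierarchical_embed(query, neighbours_by_type):
    # Intra-type attention: one summary vector per neighbour type.
    summaries = np.stack([attend(query, nbrs)
                          for nbrs in neighbours_by_type.values()])
    # Inter-type attention: fuse the per-type summaries into one embedding.
    return attend(query, summaries)

d, rng = 16, np.random.default_rng(0)
node = rng.standard_normal(d)
embedding = hierarchical_embed(node, {
    "researcher": rng.standard_normal((5, d)),
    "paper": rng.standard_normal((7, d)),
    "group": rng.standard_normal((3, d)),
})
```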

M-Net based Stacked Autoencoder for Ransomware Detection using Blockchain Data

Ransomware is a kind of malicious software that encrypts the files on a hard disk and prevents users from accessing them until a ransom is paid. Organizations such as financial institutions and healthcare providers (e.g., smart healthcare) are the most frequent targets. Ransomware attacks remain a critical vulnerability for blockchain technology and hinder effective data communication in networks. This study introduces an efficient system, named M-Net-based Stacked Autoencoder (M-Net_SA), for ransomware detection using blockchain data. First, the input data is taken from a dataset and passed to a feature extraction stage that computes sequence-based statistical features. The data is then converted into a usable format with the Yeo-Johnson transformation, and feature fusion is performed by a Deep Q-network (DQN) with Lorentzian similarity to enhance the representativeness of the target features. Finally, ransomware detection is carried out by the proposed M-Net_SA, which integrates MobileNet with a Deep Stacked Autoencoder (DSAE). In experimental validation against conventional techniques, the proposed model attained maximum accuracy, sensitivity, and specificity of 0.959, 0.967, and 0.957, respectively.
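
The Yeo-Johnson step has a standard implementation in SciPy, sketched below on synthetic skewed data rather than the blockchain dataset.

```python
import numpy as np
from scipy.stats import skew, yeojohnson

# Synthetic, heavily skewed stand-in for one extracted statistical feature.
raw = np.random.default_rng(1).exponential(scale=3.0, size=500)

transformed, lmbda = yeojohnson(raw)   # lambda fitted by maximum likelihood
print(f"fitted lambda: {lmbda:.3f}")
print(f"skewness before: {skew(raw):.2f}, after: {skew(transformed):.2f}")
```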

A switching based forecasting approach for forecasting sales data in supply chains

Forecasting future demand is a challenging task for supply chain practitioners, further exacerbated by recent pandemic effects. While the literature suggests that ML/AI approaches can improve accuracy over traditional forecasting methods based on probabilistic distributions, the extent of the improvement varies from case to case. Traditional probabilistic forecasting approaches are often less accurate, and their errors can distort estimates of overall business costs. With the advancement of artificial intelligence (AI) approaches such as machine learning (ML) and deep learning (DL), this misestimation can be reduced by forecasting demand more accurately from historical data. Accordingly, this paper applies several AI-based approaches to predict demand. Since no single AI approach works best for all datasets, a switching-based forecasting approach (SBFA) is proposed to exploit the merits of different advanced ML/DL approaches for different days-ahead predictions. Based on performance on validation data, the proposed system automatically switches between approaches to select the more appropriate forecaster. A two-echelon supply chain model with different attributes is developed to validate the proposed SBFA against several traditional forecasting approaches; the reorder points of this supply chain model are calculated from the predictions of the conventional/ML/DL forecasters. Predictions from SBFA and the other approaches are analysed by calculating overall supply chain cost. The overall supply chain costs under static and dynamic lead time settings demonstrate the effectiveness and applicability of the proposed SBFA against traditional forecasting approaches.
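
A minimal sketch of the switching idea: for each forecast horizon, keep whichever candidate had the lowest validation error. The stand-in forecasters and MAE criterion below are assumptions, not the paper's ML/DL pool.

```python
import numpy as np

def pick_forecasters(models, val_series, horizons):
    """models: dict name -> callable(history, h) returning an h-step forecast.
    Returns the best model name per horizon by validation MAE."""
    chosen = {}
    for h in horizons:
        history, actual = val_series[:-h], val_series[-h:]
        errors = {name: np.mean(np.abs(f(history, h) - actual))
                  for name, f in models.items()}
        chosen[h] = min(errors, key=errors.get)
    return chosen

# Two toy stand-in forecasters (the paper's pool is ML/DL models).
naive = lambda s, h: np.repeat(s[-1], h)                                  # carry last value
drift = lambda s, h: s[-1] + (s[-1] - s[0]) / (len(s) - 1) * np.arange(1, h + 1)
series = np.linspace(100, 130, 60) + np.random.default_rng(2).normal(0, 1, 60)
print(pick_forecasters({"naive": naive, "drift": drift}, series, horizons=[1, 7, 14]))
```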

A Fermatean Fuzzy SWARA-TOPSIS Methodology based on SCOR model for Autonomous Vehicle Parking Lot Selection

Population growth in crowded cities and the resulting increase in vehicle use have led to a shortage of parking. When public parking lots and urban growth are not coordinated, vehicles park on the street and block crosswalks. In the coming years, this problem will become more complicated with the addition of autonomous vehicles (AVs) to urban traffic. This study addresses the research question of how to effectively select AV parking lots in urban areas experiencing population growth and increased vehicle usage. To this end, a hybrid Multi-Criteria Decision Making (MCDM) methodology combining SWARA (Step-wise Weight Assessment Ratio Analysis) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) in a Fermatean Fuzzy (FF) environment is proposed. A decision hierarchy based on the SCOR model is developed to determine and structure the evaluation criteria. A case study is then conducted on selected districts of Istanbul, Turkiye's most populous and rapidly developing city. Operating expenses, safety and security, and land costs emerge as the most important factors. The detailed fuzzy analysis determines which districts should be prioritized for AV parking lots in Istanbul, and the robustness and validity of the results are then examined through sensitivity analysis. The study contributes by providing insights into AV parking lot selection, demonstrating the efficacy of the proposed methodology, and highlighting the importance of addressing this issue in urban planning.
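
To illustrate the ranking step, a plain (crisp) TOPSIS is sketched below; the paper's Fermatean fuzzy environment and SWARA-derived weights are omitted, and the numbers are invented placeholders.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights   # normalise, then weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                          # closeness: higher is better

# Invented example: three districts scored on safety (benefit) and land cost.
scores = topsis(np.array([[0.7, 120.0], [0.9, 150.0], [0.8, 100.0]]),
                weights=np.array([0.6, 0.4]),
                benefit=np.array([True, False]))
print(scores.argsort()[::-1])   # district indices, best first
```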
