The weight fuzzy judgment method for the benchmarking sustainability of oil companies

The environmental, social, and economic challenges associated with the large-scale activities of the oil and gas industry require analysis and evaluation of companies' sustainability priorities. Evaluating the environmental performance of oil companies requires rigorous mathematical tools and models that yield reliable results. Multi-criteria decision-making (MCDM) is such a tool for evaluating the performance and practices of oil and sustainable-energy companies. However, classical multi-criteria decision analysis suffers from inconsistency because of the subjective nature of pairwise comparisons. This study responds to that gap in the literature by utilizing the weight fuzzy judgment method (WFJM) to determine the weight coefficients of criteria with zero inconsistency. The method employs criteria values and expert judgment, bridging the mathematical and human approaches to decision-making. This study aims to develop a solid decision support system by evaluating and benchmarking 11 companies using the fuzzy judgment method, grounded in fuzzy theory, together with the VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method to determine sustainability priorities for oil and gas companies. The criteria were efficiently weighted by the fuzzy judgment method: organizational capabilities received the greatest weight (0.128), whilst the high cost of technologies received the lowest (0.080). The benchmarking results revealed that COM 8 was the best company and COM 11 the worst. The fuzzy judgment technique is a distinctive approach to complicated decision-making problems, and its influence can be significant. Furthermore, the method can be applied in sectors beyond energy, including banking, medical services, and engineering, wherever decision-making is not straightforward and requires a systematic approach.
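The VIKOR ranking step the abstract refers to can be sketched as follows; this is a minimal, generic implementation with a hypothetical decision matrix and weights, not the study's actual 11-company data or its WFJM weights.

```python
import numpy as np

def vikor(X, w, v=0.5):
    """Rank alternatives with the VIKOR compromise method.
    X: (m alternatives x n criteria) benefit-type decision matrix.
    w: criteria weights summing to 1. v: strategy weight."""
    f_best = X.max(axis=0)   # ideal value per criterion
    f_worst = X.min(axis=0)  # anti-ideal value per criterion
    # normalized, weighted distance of each alternative from the ideal
    d = w * (f_best - X) / (f_best - f_worst)
    S = d.sum(axis=1)        # group utility
    R = d.max(axis=1)        # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q                 # lower Q = better rank

# hypothetical scores of 3 companies on 3 benefit criteria
X = np.array([[7.0, 8.0, 6.0],
              [9.0, 7.0, 7.0],
              [5.0, 9.0, 8.0]])
w = np.array([0.4, 0.35, 0.25])  # hypothetical criteria weights
Q = vikor(X, w)
best = int(np.argmin(Q))
```

In a full study the weights `w` would come from the WFJM step rather than being fixed by hand.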

Decoding the scientific creative-ability of subjects using dual attention induced graph convolutional-capsule network

There is an increasing demand for creative individuals in scientific research, in the innovation divisions of software companies, and in industrial research and development. On the other hand, analytically minded people are needed in investigation departments, academia, and management. Unfortunately, classifying individuals as creative or analytical from their behavioral responses is not easy. This paper attempts to classify people into four categories: Analytical, High Creative, Medium Creative, and Low Creative, from their brain response during a creativity test based on convergent problems. The proposed classification involves two main phases. In the first phase, a brain connectivity map is constructed from the electroencephalogram (EEG) response of the brain using Pearson's correlation. In the second phase, a set of three centrality features, namely degree, closeness, and betweenness, is extracted from the connectivity map and fed to a classifier model for categorizing the aforesaid class labels. The classifier model synergistically combines a Graph Convolution Network (to abstract the connectivity-based centrality features) and a Capsule Network (to undertake the classification task), yielding the proposed Dual Attention Induced Graph Convolutional-Capsule Network (DAIGC-CapsNet). The novelty of the proposed classifier lies in its dual attention module and a new routing algorithm. The dual attention module includes (a) a Mish Induced Attention Module (MI-AM) that guides the graph convolution layers to focus on the most significant node attributes, and (b) a Fused Attention Module (F-AM) that ensures the transmission of the most relevant predictions from the primary capsule layer to the class capsule layers.
The latter combines the effects of two sub-modules (channel and spatial) that determine "what" and "where" to prioritize within the channel and spatial dimensions of the primary capsules. Lastly, the coupling between the primary and class capsule layers is strengthened by a Sparsemax-based routing algorithm. Experiments yield definitive outcomes that substantiate the effectiveness of the proposed framework with respect to its conventional counterparts. Moreover, statistical validation of the proposed classifier using Friedman's test confirms its efficacy compared to its competitors.
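The first phase described above (correlation-based connectivity map plus centrality features) can be sketched with plain NumPy; the thresholds, channel count, and synthetic EEG below are illustrative assumptions, not the paper's settings, and the betweenness feature (which needs a full shortest-paths algorithm) is omitted for brevity.

```python
import numpy as np
from collections import deque

def connectivity_map(eeg, threshold=0.5):
    """Build a binary brain-connectivity graph from multichannel EEG.
    eeg: (channels x samples) array; edges where |Pearson r| > threshold."""
    r = np.corrcoef(eeg)                     # channel-by-channel correlation
    adj = (np.abs(r) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                 # no self-loops
    return adj

def degree_centrality(adj):
    n = len(adj)
    return adj.sum(axis=1) / (n - 1)

def closeness_centrality(adj):
    """Closeness via BFS shortest paths on the unweighted graph."""
    n = len(adj)
    out = np.zeros(n)
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())
        if total > 0:
            out[s] = (len(dist) - 1) / total
    return out

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))               # 8 synthetic channels
eeg[1] = eeg[0] + 0.1 * rng.standard_normal(256)  # one correlated pair
adj = connectivity_map(eeg)
deg = degree_centrality(adj)
clo = closeness_centrality(adj)
```

In the paper these node features then feed the graph-convolutional front end of DAIGC-CapsNet.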

Image inpainting based on GAN-driven structure- and texture-aware learning with application to object removal

In this paper, a novel deep learning-based image inpainting framework is proposed that first restores image structure and then reconstructs image details from corrupted images. Most image inpainting methods in the literature aim to restore image details, outlines, and colors simultaneously, which may cause blurring, deformation, and unreasonable content recovery due to interference among the different kinds of information. To solve these problems, a two-stage image inpainting deep neural network based on the GAN (generative adversarial network) architecture is proposed. The framework consists of two modules: (1) the first stage, the structure-aware learning stage, learns a GAN-based structure restoration network focused on recovering the low-frequency image component, including the colors and outlines of the missing regions of the corrupted input; and (2) the second stage, the texture-aware learning stage, learns a GAN-based detail refinement network focused on rebuilding the high-frequency image details and texture information. In particular, we also propose removing details from the training images to better train the structure restoration network, avoiding the inadequate structure recovery induced by rich image textures; the detail reconstruction task is left to the second stage. This strategy balances the workload between the two stages, and the image quality is progressively enhanced through them. Experimental results show that the proposed deep inpainting framework quantitatively and qualitatively achieves state-of-the-art performance on the well-known CelebA, Places2, and ImageNet datasets, compared with existing deep learning-based image inpainting approaches.
More specifically, in terms of the two well-known image quality assessment metrics, PSNR (peak signal-to-noise ratio) and SSIM (structural similarity), the improvement of the proposed method over the baseline approach ranges from 3.23% to 11.12% and from 1.95% to 13.39%, respectively. The improvements stably and significantly outperform the compared state-of-the-art methods for most types of inpainting masks. We also show that the proposed method is applicable to image editing, namely object removal from a single image.
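The structure/texture split underlying the two stages can be illustrated with a simple low-pass decomposition; the box filter below is a generic stand-in (the paper's actual detail-removal procedure is not specified here), and the kernel size and image are illustrative.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable k x k box filter: a cheap low-pass to isolate structure."""
    pad = k // 2
    out = np.pad(img, pad, mode='edge').astype(float)
    # horizontal then vertical averaging pass
    out = np.stack([np.roll(out, s, axis=1) for s in range(-pad, pad + 1)]).mean(0)
    out = np.stack([np.roll(out, s, axis=0) for s in range(-pad, pad + 1)]).mean(0)
    return out[pad:-pad, pad:-pad]

rng = np.random.default_rng(1)
img = rng.random((32, 32))       # stand-in for a training image
structure = box_blur(img)        # low-frequency part: colors/outlines (stage 1)
texture = img - structure        # high-frequency residual: details (stage 2)
recon = structure + texture      # the two components recompose the image
```

The point of the decomposition is exactly the paper's training strategy: stage one only needs to predict the smooth `structure` component, and stage two adds back the `texture` residual.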

A simulation-based genetic algorithm for a semi-automated warehouse scheduling problem with processing time variability

For warehouse operations, efficiently scheduling the available resources is crucial to improving productivity and customer satisfaction. This paper proposes a simulation-based evolutionary algorithm for order scheduling and multi-robot task assignment in a robotic mobile fulfillment system. The algorithm proactively deals with processing time variability by evaluating schedules on both their system performance and their robustness under uncertain conditions. It implements an efficient resource allocation method and a variance reduction technique to reduce the overall computational burden. The experimental results show that these techniques are effective and significantly reduce the number of simulation replications required for fitness evaluation. If a candidate schedule is allocated too few simulation replications, the estimate of its long-term average performance becomes inaccurate, which can lead to an average performance loss of 7.3%. Furthermore, the proactive scheduler generates schedules that are more robust than deterministically generated ones: a reduction in average operational cost of about 5% can be reached compared to a deterministically generated schedule. The paper reveals the importance of identifying and modeling uncertainty when designing schedules for an operational system, rather than seeking optimal schedules for ideal scenarios.
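The simulation-based fitness evaluation described above can be sketched in miniature; the toy tardiness simulation, task data, and robustness penalty below are illustrative assumptions, not the paper's warehouse model. Reusing one noise matrix for every candidate is a simple common-random-numbers variance-reduction technique: candidates are compared on identical scenarios.

```python
import numpy as np

def simulate_cost(schedule, proc_means, due, noise):
    """Toy simulation: tasks run serially with noisy processing times;
    cost is total tardiness against per-task due dates."""
    t, cost = 0.0, 0.0
    for task in schedule:
        t += max(0.0, proc_means[task] + noise[task])
        cost += max(0.0, t - due[task])
    return cost

def robust_fitness(schedule, proc_means, due, noise_matrix, penalty=1.0):
    """Fitness = mean cost + penalty * std over simulation replications,
    so both average performance and robustness are rewarded."""
    costs = [simulate_cost(schedule, proc_means, due, n) for n in noise_matrix]
    return np.mean(costs) + penalty * np.std(costs)

rng = np.random.default_rng(2)
proc_means = np.array([3.0, 5.0, 2.0, 4.0])   # expected processing times
due = np.array([4.0, 10.0, 6.0, 14.0])        # due dates
noise_matrix = rng.normal(0.0, 0.5, size=(30, 4))  # 30 shared replications
f_edd = robust_fitness([0, 2, 1, 3], proc_means, due, noise_matrix)  # due-date order
f_rev = robust_fitness([3, 1, 2, 0], proc_means, due, noise_matrix)  # reversed
```

A genetic algorithm would use `robust_fitness` to rank candidate schedules; here the due-date-ordered schedule scores better than its reverse.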

Effective anti-submarine decision support system based on heuristic rank-based Dijkstra and adaptive threshold partitioning mechanism

Submarines possess strong covert strike capabilities, making anti-submarine warfare (ASW) a global naval priority. The Hidden Markov Anti-Submarine Model (HMASM) has crucial applications in dynamic and uncertain ASW. The model divides ASW into two phases: partitioning and search path planning. Current partitioning algorithms, whose parameters are determined empirically, often include cells that lead to redundant and competing searches. Additionally, the genetic algorithm (GA)-based path planning algorithms used in HMASM are time-consuming and exhibit unstable performance. This paper proposes a novel solution to these issues. For partitioning, a Reassigned k-Nearest Neighbour (RKNN) algorithm is introduced that identifies and reallocates the cells causing repeated and competing searches. For search path planning, heuristic rules for Dijkstra's cost function and goal-point selection transform the NP-hard problem of maximizing the expected number of detections (ED) into a form compatible with a deterministic algorithm. A heuristic rank-based selection model considering distance and probability, Rank-Based Dijkstra under the Hidden Markov Model (HMM-R-Dijkstra), is added to Dijkstra's algorithm. Furthermore, an Adaptive Threshold Partitioning (ATP) mechanism dynamically monitors searcher exploration, setting variables and thresholds to determine the optimal partitioning time and prevent untimely or excessive partitioning. Combining RKNN, HMM-R-Dijkstra, and ATP yields R-HRD-ATP, which optimizes all parameters using parallel structures. Across three comparative experiments, R-HRD-ATP's performance steadily improves. Experiments comparing R-HRD-ATP with GA, Sparrow Search Algorithm (SSA)-GA, and Ant Colony Optimization (ACO)-GA reveal performance enhancements of 25–70.99% and speed-ups by factors of 56–295 for our model. Importantly, no parameter adjustments are required.
The success of R-HRD-ATP indicates that its heuristic rules can support the robust application of deterministic path planning algorithms in HMASM.
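The idea of folding detection probability into Dijkstra's cost function can be sketched on a small grid; the cost rule `1 + (1 - p)` and the probability map below are generic illustrative assumptions, not the paper's rank-based heuristic.

```python
import heapq

def prob_weighted_dijkstra(prob, start, goal):
    """Dijkstra on a grid where stepping into a cell costs 1 + (1 - p):
    cells with higher detection probability p are cheaper, so the shortest
    path is pulled through high-probability regions."""
    rows, cols = len(prob), len(prob[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (r + dr, c + dc)
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                nd = d + 1.0 + (1.0 - prob[v[0]][v[1]])
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
    # reconstruct the path from goal back to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

prob = [[0.1, 0.9, 0.9],   # hypothetical target-probability grid
        [0.1, 0.1, 0.9],
        [0.1, 0.1, 0.8]]
path, cost = prob_weighted_dijkstra(prob, (0, 0), (2, 2))
```

On this grid the planner routes along the high-probability top and right cells rather than the cheaper-looking straight diagonal region.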

Joint learning strategy of multi-scale multi-task convolutional neural network for aero-engine prognosis

Remaining useful life (RUL) prediction and health status (HS) assessment are two key tasks in aero-engine prognostics and health management (PHM) systems. However, existing deep learning-based prognostic models perform RUL prediction and HS assessment separately, without considering the correlation between the two tasks. Moreover, traditional deep learning models extract only single-scale features, which limits their ability to extract complex degradation features from high-dimensional condition monitoring data. This work therefore proposes a multi-scale, multi-task convolutional neural network for joint learning of aero-engine RUL prediction and HS assessment. Firstly, multi-sensor data spanning multiple cycles are converted into image samples to integrate more condition monitoring information beneficial to prognosis. Then, a multi-scale feature fusion block is designed as the shared network for the two tasks, using convolutional layers with filters of different sizes to enhance the extraction of complex degradation features from high-dimensional condition monitoring data. A multi-layer concatenation block integrates multi-scale features at different levels to fully exploit the important information at each level. On this basis, a multi-task joint learning block is constructed and a joint loss function is developed for joint learning of RUL prediction and HS assessment. Finally, experiments on two engine degradation datasets, CMAPSS and N-CMAPSS, demonstrate that the proposed network delivers excellent RUL prediction and HS assessment performance and outperforms other state-of-the-art methods.
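A joint loss of the kind described above is typically a weighted sum of a regression term (RUL) and a classification term (HS). The sketch below assumes an equal-weight MSE plus cross-entropy combination with hypothetical predictions; the paper's exact loss formulation and weighting are not reproduced here.

```python
import numpy as np

def joint_loss(rul_pred, rul_true, hs_logits, hs_true, alpha=0.5):
    """Joint objective for the two prognosis tasks: alpha * MSE on the RUL
    regression plus (1 - alpha) * cross-entropy on the HS classification."""
    mse = np.mean((rul_pred - rul_true) ** 2)
    # numerically stable softmax cross-entropy over health-state classes
    z = hs_logits - hs_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(hs_true)), hs_true])
    return alpha * mse + (1 - alpha) * ce

# hypothetical predictions for a batch of two engines
rul_pred = np.array([110.0, 80.0])
rul_true = np.array([100.0, 85.0])
hs_logits = np.array([[2.0, 0.1, -1.0],   # 3 health-state classes
                      [0.0, 1.5, 0.2]])
hs_true = np.array([0, 1])
loss = joint_loss(rul_pred, rul_true, hs_logits, hs_true)
```

Minimizing a single scalar like this is what lets the shared multi-scale backbone learn features useful to both tasks at once.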

Processing 2D barcode data with metaheuristic based CNN models and detection of malicious PDF files

Portable Document Format (PDF) is a file format designed to produce portable, printable documents across platforms, and PDF files are among the most widely used document types in computer-based systems. The very functionality that makes PDFs popular with users around the world can be exploited by malware developers: malicious code can be integrated through embedded files, JavaScript, nested PDF files, and so on. As a result, PDFs are a source of security vulnerabilities in computer-based systems. In this study, we utilized the CIC-Evasive-PDFMal2022 dataset, made available by the Canadian Institute for Cybersecurity in 2022, which includes two categories, benign and malicious. In the preprocessing step, the proposed model transformed text-based PDF parameter data into 2D PDF417 barcodes. 2D Convolutional Neural Network (CNN) models (MobileNetV2, ResNet18, and ShuffleNet), a type of artificial neural network used in image recognition, processing, and classification, were trained on the dataset generated by the preprocessing step. Type/class-based feature sets were then obtained from each CNN model. In the last step, a metaheuristic optimization method, the Honey Badger Algorithm, was used to determine the best-performing feature set among those extracted from the CNN models. Classification with the softmax method then achieved an overall accuracy of 99.73%. The proposed approach successfully trains 2D CNNs on 1D data and, through the barcode imaging technique, prevents users from directly reading the data.
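The core preprocessing idea, turning a 1D text feature string into a 2D image a CNN can consume, can be sketched with a simple bit-matrix encoding. Real PDF417 encoding would require a barcode library (e.g. the `pdf417gen` package); the function and the feature string below are illustrative stand-ins, not the study's pipeline.

```python
import numpy as np

def text_to_bitmatrix(text, width=16):
    """Map a text-based PDF feature string to a 2D binary image by
    unpacking each character's bits and reshaping to a fixed width.
    (A stand-in for real PDF417 barcode encoding.)"""
    bits = np.unpackbits(np.frombuffer(text.encode('utf-8'), dtype=np.uint8))
    rows = -(-len(bits) // width)              # ceil division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(bits)] = bits
    return padded.reshape(rows, width)         # image fed to a 2D CNN

features = "JS:1,OpenAction:1,ObjStm:4,Pages:2"  # hypothetical PDF parameters
img = text_to_bitmatrix(features)
```

As in the paper's barcode approach, the resulting image is machine-readable but not directly human-readable.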
