Abstract

Learning large-scale fuzzy cognitive maps (FCMs) under a limited computational budget remains an open problem. To learn large-scale FCMs from time series, most existing work decomposes the problem into learning the local connections of each concept separately and then applies an optimizer to each sub-problem. However, the sub-problems may differ in how much computational effort they require, and existing methods ignore this difference, allocating the same amount of computational resources to every sub-problem. In this paper, we propose two strategies to address this issue. First, we develop a dynamic resource allocation strategy that maximizes the performance of the decomposition-based optimizer under a limited computational budget. Second, we propose a half-thresholding memetic algorithm that improves on the traditional evolutionary algorithm. We term the resulting method the half-thresholding memetic algorithm with dynamic resource allocation (HTMA-DRA). Finally, experiments on large-scale synthetic data and the DREAM datasets show that HTMA-DRA outperforms existing state-of-the-art methods.
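The core idea of dynamic resource allocation over decomposed sub-problems can be illustrated with a minimal sketch. Everything below is a toy illustration, not the paper's algorithm: the objective `local_error`, the crude local-search move, and the improvement-based allocation rule are all assumed stand-ins for the actual data-fitting error, the half-thresholding memetic optimizer, and the paper's allocation strategy.

```python
import random

def local_error(weights):
    # Toy objective standing in for the fitting error of one concept's
    # local connections; the real method fits observed time-series data.
    return sum(w * w for w in weights)

def optimize_step(weights, step=0.1):
    # One crude local-search move: perturb the weights, keep the better vector.
    candidate = [w + random.uniform(-step, step) for w in weights]
    return candidate if local_error(candidate) < local_error(weights) else weights

def learn_fcm(n_concepts=4, dim=4, total_budget=400, chunk=5):
    random.seed(0)
    solutions = [[random.uniform(-1, 1) for _ in range(dim)]
                 for _ in range(n_concepts)]
    errors = [local_error(s) for s in solutions]
    improvements = [1.0] * n_concepts  # optimistic start: each sub-problem gets tried
    spent = 0
    while spent < total_budget:
        # Dynamic allocation: spend the next chunk of evaluations on the
        # sub-problem with the largest recent improvement, instead of
        # splitting the budget evenly across all concepts.
        i = max(range(n_concepts), key=lambda k: improvements[k])
        before = errors[i]
        for _ in range(chunk):
            solutions[i] = optimize_step(solutions[i])
        errors[i] = local_error(solutions[i])
        improvements[i] = before - errors[i]
        spent += chunk
    return errors

print(learn_fcm())
```

The contrast with uniform allocation is the single line choosing `i`: a uniform scheme would cycle through sub-problems in order, whereas here the budget concentrates on sub-problems that are still making progress.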
