Abstract
Catastrophic forgetting occurs when a learning algorithm alters connections that encode previously acquired skills in order to learn a new skill. A modular approach to neural networks has recently been deemed necessary as learning problems grow in scale and complexity, since separating functionality into physically distinct network modules should intuitively reduce learning interference. However, such an approach is difficult to realize algorithmically in practice because it requires expert design and trial and error. Kashtan et al. found that evolution under an environment that changes in a modular fashion leads to the spontaneous evolution of a modular network structure. In this paper, we aim to solve the reverse problem of the modularly varying goal (MVG) approach, obtaining a highly modular structure that can mitigate catastrophic forgetting, so that the approach also applies to realistic data. First, we confirm that a configuration with a highly modular structure exists by applying MVG to a realistic dataset, and we confirm that this neural network can mitigate catastrophic forgetting. Next, we solve the reverse problem; that is, we propose a method that obtains a highly modular structure able to mitigate catastrophic forgetting. Since an MVG-obtained neural network keeps its intra-module elements relatively stable while leaving the inter-module elements relatively variable, we propose a method that restricts weight updates so that the inter-module weight elements remain relatively variable compared with the intra-module ones. Our results show that the obtained neural network has a highly modular structure and can learn an unlearned goal faster than a network trained without this method.
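As a rough illustration of that restriction (not the authors' implementation; the module assignment, function names, and scaling scheme below are all assumptions), one could damp gradient updates inside modules while letting cross-module weights change more freely:

```python
import numpy as np

# A minimal sketch, assuming a known module assignment per neuron
# (hypothetical names; the paper's actual update rule may differ).
def inter_module_mask(modules_out, modules_in):
    """True where a weight connects neurons in two different modules."""
    return modules_out[:, None] != modules_in[None, :]

def restricted_update(W, grad, mask, lr=0.1, intra_scale=0.1):
    """Take a full step on inter-module weights and a damped step on
    intra-module weights, keeping the latter relatively stable."""
    scale = np.where(mask, 1.0, intra_scale)
    return W - lr * scale * grad

# Example: a 4x4 weight matrix over two 2-neuron modules.
modules = np.array([0, 0, 1, 1])
mask = inter_module_mask(modules, modules)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
grad = rng.normal(size=(4, 4))
W = restricted_update(W, grad, mask)
```

Here `intra_scale < 1` plays the role of the restriction: intra-module weights change slowly, so structure already encoded inside a module is relatively preserved while inter-module weights stay variable.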
Highlights
Learning a variety of different skills for different problems is a long-standing goal in artificial intelligence
We treat learning a single goal as a process that involves catastrophic forgetting of the previous goal, and we present an example of a neural network learning two goals sequentially in the fixed goal (FG) manner (Appendix 3); a toy sketch of this setting follows these highlights
Conclusions and future work: In this paper, we aimed to solve the reverse problem of the modularly varying goal (MVG) approach, obtaining a highly modular structure that can mitigate catastrophic forgetting so that it also applies to realistic data
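A minimal sketch of the sequential fixed-goal setting (a toy linear model standing in for the network, with hypothetical names throughout; this is not the Appendix 3 example itself) shows how fitting a second goal degrades performance on the first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "goals": targets produced by two different linear maps.
X = rng.normal(size=(200, 8))
w_a, w_b = rng.normal(size=8), rng.normal(size=8)
y_a, y_b = X @ w_a, X @ w_b

def sgd(w, X, y, steps=500, lr=0.05):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(X)
    return w

def mse(w, y):
    return float(np.mean((X @ w - y) ** 2))

w = np.zeros(8)
w = sgd(w, X, y_a)
print("goal A loss after learning A:", mse(w, y_a))  # near zero
w = sgd(w, X, y_b)
print("goal A loss after learning B:", mse(w, y_a))  # large again
```

Because every weight is shared and unprotected, fitting goal B overwrites the solution for goal A, which is the failure mode the modular structure is meant to mitigate.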
Summary
Learning a variety of different skills for different problems is a long-standing goal in artificial intelligence. When neural networks learn a new skill, they typically lose previously acquired skills. This problem, called catastrophic forgetting, occurs because learning algorithms change connections used to encode previously acquired skills in order to learn a new skill (Ellefsen et al. 2015; Goodfellow et al. 2014; Kemker et al. 2018). A modular approach for neural networks has been deemed necessary as learning problems grow in scale and complexity (Amer and Maul 2019). This modular approach, which involves constructing a network of densely connected modules with only sparser connections between the modules (Newman 2006), intuitively should reduce learning interference by separating functionality into physically distinct network modules.
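The degree of such structure can be quantified with Newman's modularity Q. In the small self-contained sketch below (networkx is our choice of tooling here, not necessarily the paper's), two dense clusters joined by a single bridge edge score a high Q:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Two dense 4-node cliques joined by a single inter-module edge.
G = nx.disjoint_union(nx.complete_graph(4), nx.complete_graph(4))
G.add_edge(0, 4)

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print("modules:", [sorted(c) for c in communities], "Q =", round(Q, 3))
```

Denser intra-module connectivity relative to inter-module connectivity raises Q, matching the informal definition above.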