Abstract

This study investigates the integration of two prominent neural network representations into a hybrid cognitive model for solving a natural language task: pre-trained large language models serve as global learners, while recurrent neural networks offer more "local", task-specific representations. To explore the fusion of these two types of representation, we employ an autoencoder either to translate between them or to fuse them into a single model. Our exploration identifies a computational constraint, which we term limited diffusibility, highlighting the limitations of hybrid systems that operate over distinct types of representation. The findings from our hybrid system confirm the crucial role of global knowledge in adapting to a new learning task: having only local knowledge greatly reduces the system's transferability.
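To make the bridging idea concrete, the sketch below shows one plausible form such an autoencoder could take: encoders that project LLM ("global") and RNN ("local") hidden states into a shared latent space, decoders that reconstruct either representation, and a simple fusion step. All dimensions, layer choices, and the averaging-based fusion are illustrative assumptions, not the paper's reported architecture.

```python
# Minimal sketch (PyTorch) of an autoencoder bridging two representation
# types. Sizes and the fusion rule are assumptions for illustration only.
import torch
import torch.nn as nn

LLM_DIM, RNN_DIM, LATENT_DIM = 768, 256, 128  # assumed dimensions

class BridgingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoders map each representation into a shared latent space.
        self.enc_llm = nn.Sequential(nn.Linear(LLM_DIM, LATENT_DIM), nn.Tanh())
        self.enc_rnn = nn.Sequential(nn.Linear(RNN_DIM, LATENT_DIM), nn.Tanh())
        # Decoders reconstruct either representation from the latent code.
        self.dec_llm = nn.Linear(LATENT_DIM, LLM_DIM)
        self.dec_rnn = nn.Linear(LATENT_DIM, RNN_DIM)

    def llm_to_rnn(self, h_llm):
        # Translate a global (LLM) vector into a local (RNN) vector.
        return self.dec_rnn(self.enc_llm(h_llm))

    def rnn_to_llm(self, h_rnn):
        # Translate a local (RNN) vector into a global (LLM) vector.
        return self.dec_llm(self.enc_rnn(h_rnn))

    def fuse(self, h_llm, h_rnn):
        # One possible fusion: average the two latent codes.
        return 0.5 * (self.enc_llm(h_llm) + self.enc_rnn(h_rnn))

model = BridgingAutoencoder()
h_llm = torch.randn(4, LLM_DIM)  # batch of LLM hidden states
h_rnn = torch.randn(4, RNN_DIM)  # batch of RNN hidden states
print(model.llm_to_rnn(h_llm).shape)  # torch.Size([4, 256])
print(model.fuse(h_llm, h_rnn).shape)  # torch.Size([4, 128])
```

Under this reading, "limited diffusibility" would manifest as reconstruction or downstream-task loss when information is pushed through the shared latent bottleneck from one representation type to the other.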
