Abstract

This study presents the concept of transfer learning (TL) to the chemometrics community for updating deep learning (DL) models related to spectral data, particularly when a pre-trained DL model must be used in a scenario with unseen variability. This is the typical situation in which classical chemometrics models require some form of re-calibration or update. In TL, the network architecture and weights from the pre-trained DL model are complemented with extra fully connected (FC) layers when dealing with the new data. These extra FC layers are expected to learn the variability of the new scenario and adjust the output of the main architecture. Three TL approaches were compared: in the first, the weights of the initial model were kept frozen and only the newly added FC layers were trained; in the second, the weights of the initial model were retrained alongside the new FC layers; and in the third, the weights of the initial model were retrained without any extra FC layers. TL was demonstrated on two real cases related to near-infrared spectroscopy, i.e., mango fruit analysis and melamine production monitoring. In the mango case, the model must be updated to cover new seasonal variability for dry matter prediction, while in the melamine case, the model must be updated for a change in the recipe of the production material. The results showed that the proposed TL approaches successfully updated the DL models to the new scenarios in both the mango and melamine cases. TL performed better when the weights of the old model were retrained. Furthermore, TL outperformed three recent benchmark approaches to model updating. TL has the potential to make DL models widely usable, sharable, and scalable.
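
To make the three TL variants concrete, the following minimal PyTorch sketch sets them up side by side. The base 1-D CNN, layer sizes, learning rates, and data shapes here are placeholder assumptions for illustration only; they are not the architecture or hyperparameters used in the study.

```python
import copy
import torch
import torch.nn as nn

# Placeholder "pre-trained" spectral model (hypothetical architecture).
def make_pretrained(n_wl=200):
    return nn.Sequential(
        nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * n_wl, 32), nn.ReLU(),
        nn.Linear(32, 1),  # e.g. dry matter prediction
    )

def new_fc_head():
    # Extra FC layers appended to learn the new-scenario variability.
    return nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

base = make_pretrained()
# base.load_state_dict(torch.load("pretrained_weights.pt"))  # old-scenario weights

# Approach 1: freeze the pre-trained weights; train only the new FC layers.
m1_base = copy.deepcopy(base)
for p in m1_base.parameters():
    p.requires_grad = False
m1 = nn.Sequential(m1_base, new_fc_head())
opt1 = torch.optim.Adam([p for p in m1.parameters() if p.requires_grad], lr=1e-3)

# Approach 2: add the same extra FC layers, but retrain the pre-trained weights too.
m2 = nn.Sequential(copy.deepcopy(base), new_fc_head())
opt2 = torch.optim.Adam(m2.parameters(), lr=1e-4)

# Approach 3: no extra FC layers; simply fine-tune the pre-trained network.
m3 = copy.deepcopy(base)
opt3 = torch.optim.Adam(m3.parameters(), lr=1e-4)

# One gradient step on (synthetic stand-in) new-scenario spectra of shape (batch, 1, n_wl).
x, y = torch.randn(16, 1, 200), torch.randn(16, 1)
loss_fn = nn.MSELoss()
for model, opt in [(m1, opt1), (m2, opt2), (m3, opt3)]:
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

In practice, the frozen-base variant updates only the small FC head on the new-scenario samples, while the other two variants also adjust the pre-trained weights, which is the setting reported to perform better in the study.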
