Abstract

Deep convolutional neural networks (DCNNs) have become the dominant machine learning approach for visual object recognition and have been widely applied to food image recognition, where they achieve excellent performance. However, food-ingredient datasets are difficult to obtain and are typically too small to train a deep model from scratch. For such small-scale datasets, this paper proposes a novel DCNN architecture: a combinational convolutional neural network with double subnets (CBDNet) for the automatic classification of food ingredients using feature fusion. The feature-fusion component aggregates the two subnets to extract richer and more precise deep features. To further improve classification accuracy, several useful strategies are adopted, including batch normalisation (BN) and careful hyperparameter settings. Experimental results show that CBDNet, integrating double subnets, feature fusion and BN, extracts better image features and effectively improves the performance of food-ingredient recognition.
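For illustration, the sketch below shows the general idea of fusing features from two subnets in PyTorch: two backbone networks produce pooled feature vectors that are concatenated (feature fusion), normalised with BN, and passed to a classifier head. The choice of backbones, feature dimensions and class count here are illustrative placeholders under assumed settings, not the paper's exact CBDNet configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoSubnetFusionNet(nn.Module):
    """Illustrative two-subnet feature-fusion classifier (not the exact CBDNet)."""
    def __init__(self, num_classes):
        super().__init__()
        # Two backbone subnets; ResNet-18 and DenseNet-121 are placeholder choices.
        resnet = models.resnet18(weights=None)
        densenet = models.densenet121(weights=None)
        self.subnet_a = nn.Sequential(*list(resnet.children())[:-1])    # -> (B, 512, 1, 1)
        self.subnet_b = nn.Sequential(densenet.features,
                                      nn.ReLU(inplace=True),
                                      nn.AdaptiveAvgPool2d(1))          # -> (B, 1024, 1, 1)
        fused_dim = 512 + 1024
        # Feature fusion by concatenation, followed by BN and a classifier head.
        self.bn = nn.BatchNorm1d(fused_dim)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, x):
        fa = torch.flatten(self.subnet_a(x), 1)
        fb = torch.flatten(self.subnet_b(x), 1)
        fused = torch.cat([fa, fb], dim=1)   # aggregate features from both subnets
        return self.classifier(self.bn(fused))

# Example: classify a batch of 224x224 images into an assumed 41 ingredient classes.
model = TwoSubnetFusionNet(num_classes=41)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 41])
```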
