Abstract

Hand hygiene is a critical component of safe food handling. In this paper, we apply an iterative engineering process to design a hand-hygiene action detection system that improves food-handling safety. We demonstrate the feasibility of a baseline RGB-only convolutional neural network (CNN) in the restricted case of a single scenario; however, since this baseline performs poorly across scenarios, we also apply two methods to explore potential reasons for its poor performance. This leads to the development of a hierarchical system that incorporates multiple modalities (RGB, optical flow, hand masks, and human skeleton joints) to recognize subsets of hand-hygiene actions. Using hand-washing video recorded from several locations in a commercial kitchen, we demonstrate the effectiveness of our system at detecting hand-hygiene actions in untrimmed videos. In addition, we discuss recommendations for designing a computer vision system for a real application.
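
As a rough illustration of the hierarchical, multi-modal design described above, the sketch below combines per-modality encoders with a coarse head that routes each sample to a per-subset classification head. This is a minimal sketch under our own assumptions: the module names, feature sizes, modality channel counts, and routing scheme are hypothetical and are not taken from the paper's actual architecture.

```python
# Hypothetical sketch of a hierarchical multi-modal classifier (PyTorch).
# All names and sizes here are illustrative, not the paper's actual model.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Tiny 2D-CNN encoder applied per modality (RGB, flow, hand mask, ...)."""
    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class HierarchicalHandHygieneNet(nn.Module):
    """Coarse head picks an action subset; a per-subset head refines it."""
    def __init__(self, modal_channels=None, num_subsets: int = 3,
                 actions_per_subset: int = 4):
        super().__init__()
        # Assumed modalities and channel counts (RGB, optical flow, hand mask).
        modal_channels = modal_channels or {"rgb": 3, "flow": 2, "mask": 1}
        self.encoders = nn.ModuleDict(
            {m: ModalityEncoder(c) for m, c in modal_channels.items()})
        fused = 128 * len(modal_channels)
        self.coarse_head = nn.Linear(fused, num_subsets)
        self.fine_heads = nn.ModuleList(
            [nn.Linear(fused, actions_per_subset) for _ in range(num_subsets)])

    def forward(self, inputs: dict):
        # Late fusion: concatenate per-modality features.
        feats = torch.cat([self.encoders[m](x) for m, x in inputs.items()], dim=1)
        subset_logits = self.coarse_head(feats)
        subset = subset_logits.argmax(dim=1)
        # Route each sample to the head for its predicted subset.
        fine_logits = torch.stack(
            [self.fine_heads[s](f) for s, f in zip(subset.tolist(), feats)])
        return subset_logits, fine_logits

model = HierarchicalHandHygieneNet()
batch = {"rgb": torch.randn(2, 3, 112, 112),
         "flow": torch.randn(2, 2, 112, 112),
         "mask": torch.randn(2, 1, 112, 112)}
subset_logits, fine_logits = model(batch)
```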

Highlights

  • This paper presents an exploration of building a video-analytics system for hand-hygiene assessment in an open-room environment

  • While hand hygiene is important in hospital and health care settings as well as in daily life, here we focus on hand hygiene for safe food handling, where it is a critical component [1] of ensuring a healthy food supply

  • We focus in this paper on three main strategies for Unsupervised Domain Adaptation (UDA) in our cross-scenario hand-hygiene application: defining a common action set across all scenarios of interest, selecting a spatial region of interest (ROI) to process instead of processing the entire image, and building a robust feature representation (the ROI strategy is sketched below)
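
As a minimal illustration of the ROI strategy from the list above, the sketch below crops a fixed per-scenario sink region before classification, so the model sees a consistent view rather than scenario-specific background. The scenario names and coordinates are placeholder assumptions; in practice they would come from calibration or a hand/sink detector, not hard-coded values.

```python
# Illustrative ROI cropping, one of the three UDA strategies above.
import torch

SINK_ROIS = {  # scenario -> (top, left, height, width); assumed values
    "kitchen_cam_1": (40, 120, 224, 224),
    "kitchen_cam_2": (10, 60, 224, 224),
}

def crop_roi(frame: torch.Tensor, scenario: str) -> torch.Tensor:
    """Crop the sink region so the classifier processes a consistent
    view across scenarios instead of the entire image."""
    top, left, h, w = SINK_ROIS[scenario]
    return frame[..., top:top + h, left:left + w]

frame = torch.randn(3, 480, 640)        # one RGB frame (C, H, W)
roi = crop_roi(frame, "kitchen_cam_1")  # -> (3, 224, 224)
```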

Introduction

This paper presents an exploration of building a video-analytics system for hand-hygiene assessment in an open-room environment. Convolutional neural network (CNN) models, including AlexNet [21], VGGNet [22], and ResNet [23], perform well on image classification tasks and can be used as spatial feature extractors for video processing. 3D CNNs, which process video as fixed-size input volumes, have been applied to action recognition, including C3D [28], I3D [29], T3D [30], and P3D [31]. Due to their structure, these systems do not lend themselves to flexible designs; they require an input with a fixed temporal length and have many parameters to train.
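
To make the fixed-temporal-length constraint concrete, the hedged sketch below scores an untrimmed video window by window with a toy 3D CNN. The 16-frame clip length echoes C3D-style inputs, but the network itself is a stand-in of our own, not any of the cited models.

```python
# A 3D CNN expects clips of a fixed number of frames, so an untrimmed
# video must be scored window by window (window size/stride are illustrative).
import torch
import torch.nn as nn

clip_len, stride = 16, 8
net = nn.Sequential(                  # toy stand-in for C3D/I3D, etc.
    nn.Conv3d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 5),                  # 5 hypothetical hand-hygiene classes
)

video = torch.randn(3, 100, 112, 112)  # untrimmed video: (C, T, H, W)
scores = []
for t in range(0, video.shape[1] - clip_len + 1, stride):
    clip = video[:, t:t + clip_len].unsqueeze(0)  # (1, C, 16, H, W)
    scores.append(net(clip).softmax(dim=1))
per_window = torch.cat(scores)        # one prediction per 16-frame window
```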

