Abstract
Accurate camera-based human action recognition over longer periods of time, or across different camera views, requires re-identification of individuals to correctly associate actions with the people performing them. This is especially important when tracking people's actions over time. Most current work in person re-identification focuses on improving model performance for images of people wearing everyday clothing. This becomes a problem when the re-identification scenario changes, and with it the typical appearance of people in that specific environment. As artificial intelligence and computer vision find more and more applications in the medical field, the question arises to what extent current person re-identification algorithms can generalize from non-medical data to medical scenarios. Since person re-identification is a well-studied topic in the computer vision community and can also be applied in medical settings, this work examines the effects of medical clothing on five different person re-identification algorithms. Medical clothing is particularly challenging because it is highly uniform and covers many of a person's distinguishing characteristics. In addition to examining these effects, ways to overcome the resulting limitations are discussed. In the absence of medical datasets for person re-identification, a suitable dataset was created, containing images of people in medical clothing together with the required annotations. Five existing re-identification models were trained on a non-medical dataset and then tested on the medical data created for this study. The results show a general drop in performance when subjects wear medical clothing instead of everyday clothes. By additionally marking all subjects with individually colored hairnets, re-identification performance can be improved compared to unmarked subjects.