Medical Object Detection (MOD) is a clinically relevant image processing method that locates structures of interest in radiological image data at the object level using bounding boxes. High-performing MOD models require large datasets that accurately reflect the feature distribution of the corresponding problem domain. However, strict privacy regulations protecting patient data often hinder data consolidation, negatively affecting the performance and generalization of MOD models. Federated Learning (FL) offers a solution by enabling model training while the data remain at their original source institutions. While existing FL solutions for medical image classification and segmentation demonstrate promising performance, FL for MOD remains largely unexplored. Motivated by this lack of technical solutions, we present an open-source, self-configuring, and task-agnostic federated MOD framework. It integrates the FL framework Flower with nnDetection, a state-of-the-art MOD framework, and provides several FL aggregation strategies. Furthermore, we evaluate model performance in simulated Independent and Identically Distributed (IID) and non-IID scenarios created from publicly available datasets. Additionally, a detailed analysis of the distributions and characteristics of these datasets offers insights into how they can impact model performance. Our framework’s implementation demonstrates the feasibility of federated self-configuring MOD in non-IID scenarios and facilitates the development of MOD models trained on large distributed databases.
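To make the Flower-based aggregation setup concrete, the following is a minimal sketch of how a server with the built-in FedAvg strategy and a stand-in client could be wired up. It is not the paper's actual implementation: the address, port, round count, client counts, and the `ToyDetectionClient` class are illustrative assumptions, and a real deployment would exchange nnDetection network weights rather than the toy parameters shown here.

```python
import flwr as fl
import numpy as np


class ToyDetectionClient(fl.client.NumPyClient):
    """Stand-in client; a real client would wrap an nnDetection model
    trained on one institution's private data (hypothetical example)."""

    def __init__(self):
        # Toy parameter vector standing in for detection-network weights.
        self.weights = [np.zeros(10, dtype=np.float32)]

    def get_parameters(self, config):
        return self.weights

    def fit(self, parameters, config):
        self.weights = parameters  # load the current global weights
        # ... local training on the institution's private data would go here
        return self.weights, 10, {}  # updated weights, num_examples, metrics

    def evaluate(self, parameters, config):
        # ... local validation would go here
        return 0.0, 10, {}  # loss, num_examples, metrics


if __name__ == "__main__":
    # Weighted-average aggregation over client updates (FedAvg); Flower's
    # other strategies (e.g. FedProx, FedAdagrad) can be swapped in here.
    strategy = fl.server.strategy.FedAvg(
        fraction_fit=1.0,        # sample every available client each round
        min_fit_clients=2,       # wait for at least two institutions
        min_available_clients=2,
    )
    fl.server.start_server(
        server_address="0.0.0.0:8080",  # assumed address/port
        config=fl.server.ServerConfig(num_rounds=3),  # assumed round count
        strategy=strategy,
    )
```

Each participating institution would then run a client process against the same address, e.g. `fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=ToyDetectionClient())`; the strategy object is the pluggable point where alternative aggregation schemes enter.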