This paper introduces systems theory and system safety concepts into ongoing academic debates about the safety of Machine Learning (ML) systems in the public sector. In particular, we analyze the risk factors of ML systems and of their institutional context that affect the ability to control such systems. Using interview data, we abductively identify which risk factors are present in public professionals' perceptions and which factors, expected on the basis of systems theory, are missing from those perceptions. Based on the hypothesis that ML systems are best addressed through a systems theory lens, we argue that the missing factors deserve greater attention in ongoing efforts to improve ML system safety. These factors include the explication of safety goals and constraints, the inclusion of systemic factors in system design, the development of safety control structures, and the tendency of ML systems to migrate towards higher risk. Our observations support this hypothesis, and we therefore conclude that system safety concepts can be useful aids for policymakers who aim to improve ML system safety.