ABSTRACT AI systems used in migration, asylum and border control management are classified as high-risk AI under the current draft AI Act in the EU. The draft Act introduces strict requirements for the development and use of these systems, including the requirement that high-quality data be used for training algorithms in order to mitigate risks to fundamental rights and safety. Based on research conducted in the framework of the H2020 CRiTERIA project,1 this study analyses, from a legal doctrinal approach, whether open-source data from Social Media platforms complies with the high-quality data requirement and what challenges such data presents for the transparency requirement. In light of the requirements introduced to mitigate the risks that high-risk AI poses to fundamental rights and safety, the compliance of open-source Social Media data with the high-quality data requirement is called into doubt. As transparency is found to be the line dividing high-risk from unacceptable-risk AI, it is argued that the use of open data from Social Media in risk-assessment AI systems for border control may present an unacceptable risk to the protection of fundamental rights if proper safeguards are not followed.