Abstract

In a science-based site selection procedure (StandAV), the Federal Republic of Germany is searching for the site that offers the best possible safety for a repository for high-level waste (HLW) over a period of 1 million years. This is a challenging task, given the size of Germany, its geological variability and the verification period required to ensure the best possible long-term safety. In addition, an enormous amount of heterogeneous geodata has to be processed. The application of artificial-intelligence-based methods in the geosciences has great potential for dealing with large heterogeneous data sets. In the project “Application of artificial intelligence (AI) in the site selection process for a deep geological repository” (FKZ 4721E03210), funded by the German Federal Office for the Safety of Nuclear Waste Management (BASE), an interdisciplinary assessment tool is being developed to evaluate the applicability of AI methods in the geosciences. This contribution focuses on the potential and challenges of applying AI in the geosciences with respect to key geological activities in the StandAV, highlights the limitations that may arise from its use, and proposes the conditions necessary for its future applicability. The results show that AI methods offer clear advantages over conventional methods for data management, for handling large geological data sets and for modelling complex, coupled long-term geological processes. However, AI methods are generally only transferable to the geoscientific questions of the StandAV with methodological and subject-specific adaptations. Furthermore, the use of AI requires sufficient data in both quality and quantity. The study also shows that AI should only be used in a supportive way to address geological issues in key activities and should not have any decision-making power when used in the StandAV. For example, AI can carry the risk of data and developer bias, which in turn can have serious consequences for the correct interpretation of results. High demands must therefore be placed on the traceability of the AI methods used. AI methods that do not meet the transparency requirements of the StandAV carry a significant risk of jeopardising public confidence in the participation process. This could increase general mistrust of and scepticism towards AI in public perception (Krob et al., 2023). It is strongly recommended that all methods applied to the key activities of the StandAV be evaluated and validated iteratively and that the results be made available to the public.
