Artificial intelligence (AI) has entered libraries in various ways and raised concerns about its potential ethical consequences therein. A number of approaches have been developed to encourage ethical AI and to audit the ethics of specific AI applications, but very few have been applied or tested, especially in a library setting, so it remains unclear which, if any, approaches are suitable or useful for encouraging ethical AI in libraries. We applied Ethical Foresight Analysis to identify possible ethical risks of an AI project for (semi-)automated subject indexing in a large research library. Specifically, we conducted a two-round ethical Delphi study in which experts on AI development, library practices, and AI ethics sought consensus on potential risks and their relative importance. The experts' post-test reflections on the procedure were then collected to inform an evaluation of the approach's feasibility. A variety of ethical risks of the specific project and of AI indexing in general were indeed identified, most notably discrimination and under-representation stemming from attributes of the bibliographic training data provided by the library (e.g. varied historical contexts and gaps left by unindexed items). However, we identified two drawbacks of the approach tested: (1) it is time-consuming, which is likely prohibitive for many libraries, and (2) the identified risks were mainly well-known issues of AI and its training data rather than the subtle, application-specific, and human-centred issues that Ethical Foresight Analysis might be employed to uncover. Thus, although libraries should continue to model ethical AI through careful planning and auditing, alternative development and auditing approaches may be more practical to undertake and more effective at identifying novel or application-specific issues.