Artificial Intelligence (AI) regulatory and other governance mechanisms have only begun to emerge and consolidate. AI regulation, legislation, frameworks, and guidelines therefore remain fragmented and isolated, or co-exist in an opaque space between national governments, international bodies, corporations, practitioners, think-tanks, and civil society organisations. This article proposes a research design set up to address this problem by collaborating directly with targeted actors to identify principles for AI that is trustworthy, accountable, safe, fair, and non-discriminatory, and that puts human rights and the social good at the centre of its approach. It proposes 21 interlinked substudies focusing on the ethical judgements, empirical statements, and practical guidelines that manufacture ethicopolitical visions and AI policies across four domains: seven tech corporations, seven governments, and seven civil society actors, together with an analysis of online public debates. The proposed research design combines multiple research techniques: extensive mapping and study of AI ethics policy documents, 120 interviews with key individuals, and assorted analyses of public feedback and discussion loops on AI, employing digital methods on online communities specialising in AI debates. It considers novel conceptual interactions communicated across the globe and expands regulatory, ethical, and technological foresight at both the individual level (autonomy, identity, dignity, privacy, and data protection) and the societal level (fairness/equality, responsibility, accountability and transparency, surveillance/datafication, democracy and trust, collective humanity and the common good). By producing an innovative, intercontinental, multidisciplinary research design for an Ethical AI Standard, this article offers a concrete plan in the search for the Holy Grail of Artificial Intelligence: its ethics.