Purpose
The study aims to identify the practical boundaries of AI legal personality and accountability in human-centric services.

Design/methodology/approach
Using a framework tailored for AI studies, this research analyses structured interview data collected from auditors based in Poland.

Findings
The study identified new constructs that complement the taxonomy of arguments for AI legal personality: cognitive strain, consciousness, the cyborg paradox, reasoning replicability, relativism, AI misuse, excessive human effort and substitution.

Research limitations/implications
The insights presented herein are derived primarily from the perspectives of Polish auditors. Further exploration is needed into the viewpoints of other key stakeholders, such as lawyers, judges and policymakers, across various global contexts.

Practical implications
The findings of this study can guide the formulation of regulatory frameworks tailored to AI applications in human-centric services. The proposed sui generis AI personality institution offers a dynamic and adaptable alternative to conventional legal personality models.

Social implications
The research contributes to the ongoing public discourse on AI’s societal impact and encourages a balanced assessment of the potential advantages and challenges of granting legal personality to AI systems.

Originality/value
This paper advocates establishing a sui generis AI personality institution alongside a joint accountability model. This dual framework addresses the current uncertainties surrounding human, general AI and super AI characteristics and facilitates the joint accountability of responsible AI entities and their ultimate beneficiaries.