Abstract Artificial intelligence (AI) is increasingly performing legitimate tasks on behalf of humans. The legal dimensions of this task performance are critical, given the unprecedented ability of AI to self-learn and to act reasonably and independently. This article therefore identifies the critical regulatory aspects of AI that authorities and policymakers should consider when deciding whether AI should be regulated now or at a later stage of its development. The absence of a specific statute expressly designated as an Artificial Intelligence Law does not necessarily mean that AI is unregulated; this research argues that AI is already indirectly regulated in several respects. It further finds that broad regulation could be premature, given that some types of AI systems have not yet taken a definite form or have yet to mature. At the global level, this paper argues that it would be sensible to establish an international organization sooner rather than later, as AI grows in complexity and becomes increasingly transnational, and it identifies some of the obstacles such an organization may face. At the national level, it is suggested that the time has come to establish a regulatory infrastructure as a pre-regulatory phase, which would make future AI regulation more feasible. In particular, AI systems should be identified under a national digital identification scheme, which would render them more regulatable and manageable. This is a necessary step to enable authorities to confront the myriad legal issues and challenges over the longer term. While such a digital ID scheme for AI systems is being established, discussions of the liability arising from AI's wrongful acts would be especially productive.