Artificial intelligence has been seeping into various fields of international law for some time, from international humanitarian law, most notably the debate over the legality of autonomous weapon systems, to intellectual property law and the legal profession as a whole. One contested area that spans many of these subfields is human rights, where an already sensitive subject, open to debate and interpretation, is confronted with difficult questions. For instance, could and should human rights norms be transferred into pre-programmed entities? What relevance can human rights have for a non-human entity that has been created, programmed and assembled by humans? Vast regional differences exist between the European, African and Inter-American systems, while the Asia-Pacific region lacks a coherent structure. Our understanding of human rights has also developed substantially over the decades, especially regarding norms on slavery, free speech, the prohibition of discrimination and the rights of women, persons with disabilities and indigenous peoples, to name a few examples. Furthermore, a vast array of international human rights documents are political manifestos, employing expressions such as “respecting” and “ensuring” human rights as obligations for members of the international community. Since these provisions deliberately leave considerable room for interpretation, translating them into “binary code”, into a format digestible by an artificial entity, seems an almost impossible task. The article aims to answer these questions by analysing the abovementioned line of thought and combining it with various attempts at international regulation by states, international organisations, non-governmental organisations and think-tanks. The fundamental focus of this paper is to ascertain whether human rights and AI can be made compatible under the current framework of international law at today’s level of development.