This work discusses the challenge of developing self-cognisant artificial intelligence systems, examining the potential benefits and the main obstacles in this quest. It is argued that the complexity, variation, and specialisation of today's technological artefacts, together with their sheer number, pose a problem that can and should be addressed by an important step towards greater autonomy: the integration of learning, which allows an artefact to observe its own functionality and build a model of itself. Such a self-model can be used to adjust expectations of an imperfectly manufactured item, to patch up its performance, and to monitor its consistency over time, thereby providing a form of self-certification and a warning mechanism in case of deterioration. It is suggested that these goals cannot be fully achieved unless the learner is able to model its own performance, and the implications and difficulties of this self-reflective learning are discussed. A possible way of quantifying the faculty for self-cognition is proposed, and connections to relevant areas of computer science, philosophy, and the study of the evolution of language are outlined.
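
As an informal illustration of the kind of self-observation, self-certification, and deterioration warning described above, the following Python sketch shows a predictor that also maintains a simple statistical model of its own error. All class names, function names, and thresholds are hypothetical assumptions introduced for illustration; they are not taken from the paper.

    import random
    import statistics


    class SelfMonitoringPredictor:
        """A predictor that also keeps a model of its own error (a crude
        'self-model') and uses it for self-certification and for warning
        about deterioration. Illustrative sketch only."""

        def __init__(self, warmup=50, z_threshold=3.0):
            self.baseline_errors = []   # errors observed during the warm-up phase
            self.recent_errors = []     # errors observed after self-certification
            self.warmup = warmup
            self.z_threshold = z_threshold

        def predict(self, x):
            # Stand-in for the artefact's actual functionality.
            return 2.0 * x

        def observe(self, x, true_y):
            """Feed back the true outcome so the artefact can watch itself."""
            err = abs(self.predict(x) - true_y)
            if len(self.baseline_errors) < self.warmup:
                self.baseline_errors.append(err)
            else:
                self.recent_errors.append(err)
            return err

        def self_certificate(self):
            """Expected error and its spread, once enough self-observation exists."""
            if len(self.baseline_errors) < self.warmup:
                return None
            return (statistics.mean(self.baseline_errors),
                    statistics.stdev(self.baseline_errors))

        def deteriorating(self, window=20):
            """Warn when recent mean error drifts far beyond the self-model."""
            cert = self.self_certificate()
            if cert is None or len(self.recent_errors) < window:
                return False
            mean_err, std_err = cert
            recent_mean = statistics.mean(self.recent_errors[-window:])
            return recent_mean > mean_err + self.z_threshold * max(std_err, 1e-9)


    if __name__ == "__main__":
        random.seed(0)
        artefact = SelfMonitoringPredictor()

        # Normal operation: outputs are noisily correct.
        for _ in range(100):
            x = random.uniform(0, 10)
            artefact.observe(x, 2.0 * x + random.gauss(0, 0.1))
        print("self-certificate (mean err, std err):", artefact.self_certificate())
        print("deteriorating?", artefact.deteriorating())

        # Simulated deterioration: the artefact (or its environment) drifts.
        for _ in range(30):
            x = random.uniform(0, 10)
            artefact.observe(x, 2.0 * x + 1.0 + random.gauss(0, 0.1))
        print("deteriorating after drift?", artefact.deteriorating())

In this toy setting the self-model is just the mean and spread of past errors; the same idea scales to richer self-models, but the division of labour is the point: one component does the work, another observes and models how well that work is being done.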