In recent years, the need to regulate robots and Artificial Intelligence, together with the urgency of reshaping the civil liability framework, has become apparent in Europe. Although civil liability has been the subject of many studies and resolutions, multiple attempts to harmonize EU tort law have so far been unsuccessful, and only the liability of producers for defective products has been harmonized. In 2021, by publishing the AI Act proposal, the European Commission took a major step toward regulating AI at the European level, classifying smart robots as "high-risk systems". This new piece of legislation, albeit tackling important issues, does not focus on liability rules. Yet regulating the responsibility of developers and manufacturers of robots and AI systems, in order to avoid a fragmented legal framework across the EU and an uneven application of liability rules in each Member State, remains an important issue that raises many concerns in industry. Deep learning techniques in particular need to be carefully regulated, as they challenge the traditional liability paradigm: it is often impossible to know the reason behind a model's output, and neither the programmer nor the manufacturer can predict the AI's behavior. For this reason, some authors have argued that liability should be taken away from producers and programmers when robots are capable of acting autonomously from their original design, while others have proposed a strict liability regime. This article explores liability issues concerning AI and robots with regard to users, producers, and programmers, especially when machine learning techniques are involved, and suggests some regulatory solutions for European lawmakers.