Once upon a time, validation of robotic research was relatively straightforward. Let us assume, for example, that a researcher had published in a journal a novel adaptive control law with a numerical example on a two-link robot. Beyond the formal proof of convergence, they supplied to the reader the differential equations used to model the system, including the corresponding dynamic parameters (no more than 20 numbers), any quantization and discretization of the controller, the solver details of the software used, and the sensor noise statistics. Not only the reviewers, thus, but also every single reader would have the possibility to re-run the numerical simulations in half a day of work. The community would have the possibility to test, validate, generalize, and benchmark the algorithm.

Since then, robotics has changed: the machines are now much more complex in their kinematics and number of degrees of freedom, and are filled with several sensors. Giant steps have also been made: the robots left the confined industrial cells to jump into unstructured environments, not only in industry but also in houses, museums, airports, and post-disaster sites; they perform a number of exciting tasks such as exploration, maintenance, interaction with humans, search and rescue . . . wait, is it really so? Beyond specific outstanding experiences, beyond the claims of manufacturers and lab directors, how many robots run, autonomously or semi-autonomously, in our daily lives? Not so many, to be honest. A few vacuum cleaning robots, that is all (Guizzo, 2015). While we have several noticeable robotic tools (parking assist systems, lane keeping assist systems, space systems), where are all the learning and adaptable robot protagonists of thousands of scientific publications of recent years? Our robots can avoid the predicted unpredicted events, but what about the truly unpredicted ones? Supplying all the information required to validate the two-link example above is obviously no longer possible, but why are we experiencing so large a gap between claimed and real robotics? Why, for several years now, has the robotics revolution been regularly postponed to the next 10 years . . .?

The grand challenge for the robotics community is to discuss, from its foundations up, the way its research is conducted. This is a huge effort involving complex interactions among institutions, ministries, funding agencies, and individual researchers' careers. Research is funded through the selection of proposals, more and more imaginative at each call, which, however, most of the time end with more or less disappointing demos. This process must perforce include a review of the validation of research in a wide sense and, within this, of the publishing process. The latter is becoming (apparently) faster and more selective, with new ideas spread and absorbed by other researchers in such a short time that a paper stalled in the hands of a reviewer or a debatable rejection may be a dramatic event. The previous claims are intentionally provocative, and so is the title of this article: are we (still) applying the scientific method in robotics? Let us frankly discuss this question.
The Oxford English Dictionary defines the scientific method as “a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.” The strict link between theory and experimentation is evident and further elaborated by, for instance, Karl Popper, who claimed that the criterion of the scientific status of a theory is its falsifiability,