Looking back over 25 years of R&D in knowledge acquisition – particularly as mediated by workshops and a journal founded by Brian Gaines – it is remarkable how topics and perspectives have changed. We started from the assumption that the knowledge acquisition bottleneck was due to the difficulty experts have in expressing their knowledge. However, explicitly available knowledge, for example in handbooks, often appeared sufficient to construct an adequate ‘artificial problem solver’. Expert systems became knowledge systems. The bottleneck was experienced instead in modeling the domain knowledge and the reasoning control – the problem solving method (PSM) – for automated execution. This bottleneck could be eased by developing libraries of reusable PSMs, or by implementing shells with built-in PSMs. Specifications of domain knowledge led to the development of ontologies: another source of reuse. Although this provided sufficient technology for a mature engineering methodology, little use is made of it in current practice, and developing ‘intelligent’ software today is not much different from what it was at the beginning of the 1980s. A more impressive heritage of knowledge acquisition R&D is the technology it introduced for building ontologies, which found its way, via knowledge management, into the architecture of the Semantic Web. In research, apparent solutions also bring new problems, and two of these are suggested by empirical research in cognitive science. The first is that knowledge as represented in currently available (top-level) ontologies is too simple, because high-level concepts may come in design ‘patterns’: a view that has recently also been taken up in the knowledge acquisition community. The second is that, despite much empirical research in cognitive psychology, we have insufficient insight into how we acquire new conceptualisations from text. This is a serious bottleneck for an early dream of knowledge acquisition: automated knowledge acquisition.