Submit an article entitled thus to your publishing editor and they will raise the query: Give the eponymous sum to a schoolchild and, provided that he or she has learned to add (or to use a calculator), he or she will also tell you the answer is 105721. Alan Turing was unabashed, in a 1950 article1 in Mind, to put the incorrect answer into the mouth of a fictional player of the ‘imitation game’. The article was entitled ‘Computing machinery and intelligence’. In it Turing asked ‘Can a machine think?’ The game came to be known as the Turing test, through which a congruence of machine and human intelligence could be established. Turing's suggested route to intelligent machines was to mimic the machinery of a child's brain and then educate the resulting machine. That suggestion continues to inspire researchers – and robots.2

And now there's a new kid on the block! WolframAlpha is described – carefully, without mentioning intelligence – as a ‘computational knowledge engine’. Its goal is to3 build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries. WolframAlpha is said to contain more than 10 trillion pieces of data, more than 50,000 algorithms and models, and linguistic capabilities covering more than 1000 domains. It's great fun to play with. One commentator has suggested that WolframAlpha is possibly an ‘emerging artificial intelligence and a step towards a self-organising internet’. That is most unlikely, at least if we use the Turing test as the criterion for judging the emergent artificial intelligence. But it is only a failure if the aim is for AI to compete with HI – human intelligence. It is very much the mission of this journal that the aim should instead be to complement and complete HI.
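For the curious reader, the arithmetic is easily checked. A minimal sketch, assuming the eponymous sum is the one posed in Turing's 1950 dialogue ('Add 34957 to 70764', to which his imagined machine replies 105621 after a pause):

```python
# Question-and-answer pair from Turing's imitation-game dialogue
# (assumed here to be the sum the editorial's title alludes to).
a, b = 34957, 70764

correct = a + b          # the schoolchild's answer: 105721
machine_reply = 105621   # the deliberately wrong reply in Turing's paper

print(correct)                    # 105721
print(machine_reply == correct)   # False: the 'machine' errs as a human might
```

The point, of course, is Turing's: a convincing player of the imitation game may need to make humanlike mistakes, not merely compute correctly.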
It is with great pleasure that we welcome to the board two new members: Professors Adrian Hopgood (De Montfort University, UK) and John Fox (University of Oxford and University College London, UK). Professor Adrian Hopgood is Dean of the Faculty of Technology at De Montfort University, Leicester, UK. His research interests lie in artificial intelligence, including knowledge-based and distributed multi-agent systems, computational intelligence (artificial neural networks, genetic algorithms, fuzzy logic) and their practical applications. He has a particular passion for hybrid systems, which bring together the best techniques for particular applications. As well as a PhD, Adrian holds a Diploma in French and an MBA from the Open University. He is also the author of the best-selling Intelligent Systems for Engineers and Scientists.

Professor John Fox attended Durham and Cambridge Universities in the UK and held postdoctoral fellowships at Carnegie-Mellon and Cornell Universities in the USA. After returning to the UK in 1975 he worked on decision making and artificial intelligence in medicine, joining Cancer Research UK (then ICRF) in 1981 to set up an interdisciplinary group in artificial intelligence, computer science and medicine. John's interests include cognitive science, computing and biomedical engineering. His recent book Safe and Sound: Artificial Intelligence in Hazardous Applications deals with the use of artificial intelligence in medicine and other safety-critical fields. He was founding editor of the Knowledge Engineering Review. John is now at Oxford, where he has set up a new collaboration in cognitive science and systems engineering (http://www.cossac.org). Please welcome both Adrian and John to the board.

This month we have four excellent papers. In ‘Fuzzy based fast dynamic programming solution of unit commitment with ramp constraints’, Patra et al.
give insight into the use of fuzzy logic for difficult multi-stage decision-making problems, showing that fuzzy models can perform as well on these problems as more traditional techniques. The difficulty of obtaining realistic simulated real-world data suitable for validating the models used in knowledge engineering of medical solutions is well known. In ‘Preliminary evaluation of electroencephalographic entrainment using thalamocortical modelling’, Cvetkovic et al. assess a number of theoretical models for their ability to stand in for actual data. Importantly, the authors identify a number of shortcomings that must be overcome if advances are to be made in this area of knowledge engineering. In ‘Modified mixture of experts employing eigenvector methods and Lyapunov exponents for analysis of electroencephalogram signals’, Übeyli presents another in her excellent series of papers dedicated to detecting variability in electroencephalogram signals, this time with a classification accuracy of 98.33%. Continuing the theme, if not the approach, of the second paper, Yang and Kecman consider an adaptive local hyperplane algorithm as a suitable classifier when only a small data set is available for learning. The proposed classifier outperforms, on average, the four benchmark classifiers in this area. Enjoy!