One of us (King) joined Monash University as a young lecturer in econometrics in 1979. At about that time, there was a noticeable leap in the availability of computing power that allowed us to contemplate a likelihood-based approach to econometrics. For example, the Faculty of Economics and Politics had taken delivery of its own VAX-11/780 minicomputer, which allowed serious statistical and econometric computations to be performed in double precision if required. Econometricians no longer had their CPU time rationed. We finally had the computational power to run maximum likelihood estimation for most models and consequently to conduct likelihood ratio (LR) tests or, if more convenient, Lagrange multiplier (LM) or Wald tests.

Did these advances mean that the theory of econometrics was “sorted”? It didn’t take long for cracks to appear. One testing problem that generated a disproportionately large literature in econometrics is that of testing for first-order autoregressive disturbances in the linear regression model; see King [5] for a survey. Here the LR test was found to be a very poor performer [1]. Unit root testing is another area in which the classical tests (LR, LM and Wald) were found to perform poorly. In general, problems were caused by small sample sizes and by difficulties in dealing with nuisance parameters, which can bias parameter estimates and, in turn, degrade the performance of tests and forecasts.

The last 35 years have seen all sorts of challenges overcome in an unending search for improvement in model-based estimation, testing and forecasting. Many of these developments have become feasible because of the advances in computer hardware and software during that time. The ready availability of simulation methods has also helped in assessing the small-sample performance of proposed methods, both absolutely and comparatively. This special issue contributes to this literature with the aim of bringing some of these advances to the notice of statisticians and others interested in model-based statistical inference.

The main ideas behind point optimal testing, plus a strategy for the general application of such tests, were outlined by King [6]. This issue opens with a paper that surveys the post-1987 literature on point optimal testing, with particular emphasis on dealing with nuisance parameters. That period has witnessed the development of asymptotic point optimal testing, led by the pioneering work of Elliott, Rothenberg and Stock [2] and, more recently, Müller [10], with a range of applications involving unit root testing, testing for cointegration, and breaks or variation in regression coefficients. The review closes with an outline of a new class of point optimal tests for multi-dimensional testing, an area with few applications to date.

Jahar Bhowmik contributes a paper demonstrating that the t-test of a classical regression coefficient is uniformly most powerful within a wider class of invariant tests. It is well known that the F-test is uniformly most powerful within the class of tests that are invariant to four separate sets of transformations of the data. Because the t-test statistic is the square root of the classical F-test statistic, research papers and standard texts (see, for example, [3,8,13]) typically state that the t-test is uniformly most powerful under the same four sets of transformations as the F-test. Bhowmik