Minimum-lap-time optimal control problems (MLT-OCPs) are a popular tool to assess the best lap time of a vehicle on a racetrack. However, MLT-OCPs with high-fidelity dynamic vehicle models are computationally expensive, which limits them to offline use. When autonomous agents are used in place of an MLT-OCP for online trajectory planning and control, the question arises of how far the resulting manoeuvre falls short of maximum performance. In this paper, we improve a recently proposed artificial race driver (ARD) for online trajectory planning and control, and we compare it with a benchmark MLT-OCP. The novel challenge of our study is that ARD controls the same high-fidelity vehicle model used by the benchmark MLT-OCP, which enables a direct comparison between ARD and the MLT-OCP. Leveraging its physics-driven structure and a new formulation of the g-g-v performance constraint, ARD achieves lap times comparable to those of the offline MLT-OCP, with differences of only a few milliseconds. We analyse the different trajectories resulting from the ARD and MLT-OCP solutions to understand how ARD minimises the effect of local execution errors in its search for the minimum lap time. Finally, we show that ARD consistently maintains its performance when tested on unseen circuits, even with unmodelled changes in the vehicle's mass.