The aim of production planning and control is to achieve the logistic targets of high due date reliability, short lead times, high capacity utilization, and low WIP levels, while maintaining productivity and quality targets. When order due dates are missed, a common intuitive reaction of production planners is to adjust the planned lead times. How often and to what extent such updates are reasonable has previously been unclear, because adjustments that are intended to improve logistic target achievement may actually cause the opposite effect, a phenomenon known as the Lead Time Syndrome (LTS) of Manufacturing Control [1].

Previous research on the LTS interactions has shown that its line of argumentation is valid [2]. Knollmann et al. demonstrated by means of mathematical modeling, control-theoretic simulation, and case study research that planned lead time adjustments lead to a short-term increase in lead time variation and thus to an increase in lateness variation and a decrease in due date reliability [2–5]. The authors suggest choosing the update frequency depending on the ratio of the latency period to the update interval (the time between two consecutive adjustments), since an imbalance between these two parameters turns out to be the main trigger of the LTS. In an independent approach, Selçuk investigated the LTS by means of queuing theory [6–8]. This research concluded that planned lead time adjustments increase process variability and thereby cause high WIP levels and long lead times. It consequently suggests reducing the update frequency in order to decrease process variability and thus avoid the LTS. This recommendation is not in line with the conclusions drawn by Knollmann et al. This paper therefore compares the methodologies of the two research approaches and discusses how these methodologies affect the conclusions drawn for application in practice. The comparison provides further insights into LTS research and identifies fields for future research.
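Because the LTS argument is easiest to see as a closed feedback loop, the following minimal simulation sketch illustrates it. All parameter values, the release rule, and the plan-matching update rule are illustrative assumptions made for this sketch only; it does not reproduce the models of Knollmann et al. or Selçuk [2–8]. The loop it encodes: a temporary disturbance lengthens actual lead times, the planner raises the planned lead time to match, the resulting earlier order releases inflate WIP and lead times further, and the next observation appears to justify a further increase.

```python
from statistics import mean

# Toy illustration of the LTS feedback loop (hypothetical parameters and
# rules; this is NOT the mathematical, control-theoretic, or queuing models
# of [2-8]). Single FIFO work system, demand equal to nominal capacity.

CAPACITY = 10          # orders the system can complete per period
HORIZON = 150          # simulated periods
UPDATE_INTERVAL = 10   # periods between planned lead time adjustments
planned_lt = 5         # initial planned lead time (periods)

# One batch of CAPACITY orders comes due in every period.
orders = [{"due": d, "rel": None, "done": None}
          for d in range(1, HORIZON + 60) for _ in range(CAPACITY)]
backlog, finished = [], []

for t in range(1, HORIZON + 1):
    # Release rule: an order enters the shop planned_lt periods before its
    # due date; raising planned_lt makes several periods' worth of orders
    # eligible at once and produces an immediate WIP surge.
    for o in orders:
        if o["rel"] is None and o["due"] - planned_lt <= t:
            o["rel"] = t
            backlog.append(o)

    # A temporary capacity loss is the external disturbance that lengthens
    # actual lead times and sets the feedback loop in motion.
    cap = CAPACITY - 4 if 40 <= t < 50 else CAPACITY
    for o in backlog[:cap]:
        o["done"] = t
        finished.append(o)
    del backlog[:cap]

    # Periodic adjustment: reset the plan to the mean actual lead time of
    # recently finished orders. These observations still reflect the
    # previous plan (the latency period), so the plan chases the WIP surge
    # that its own previous increase has created.
    if t % UPDATE_INTERVAL == 0 and finished:
        recent = finished[-5 * CAPACITY:]
        planned_lt = max(1, round(mean(o["done"] - o["rel"] for o in recent)))
        lateness = [o["done"] - o["due"] for o in recent]
        print(f"t={t:3d}  planned_lt={planned_lt:2d}  WIP={len(backlog):4d}  "
              f"mean lateness={mean(lateness):5.1f}")
```

Running the sketch shows the planned lead time and the WIP level ratcheting upward long after the ten-period breakdown has ended: the plan never catches up with the lead times its own increases create. This is precisely the vicious cycle whose appropriate remedy, tuning the update frequency to the latency period or simply reducing it, is disputed between the two research approaches compared here.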