Optimal control, a fundamental concept across engineering and the sciences, is the problem of steering a dynamic system so that it achieves a prescribed objective while respecting constraints on its states and inputs. Classical approaches, notably Pontryagin's maximum principle and dynamic programming, have long been the standard tools for solving optimal control problems. While conceptually straightforward, these methods do not always scale to high-dimensional systems, coupled dynamics, or other non-trivial constraints. This paper presents a methodology that extends classical optimal control by considering both direct and indirect optimization techniques. Direct methods, including Euler, Runge-Kutta, trapezoidal, and Hermite-Simpson transcription, compute control trajectories without requiring the explicit derivation of analytical control laws. Indirect, semi-analytical techniques, such as the shooting method, derive the control laws from the associated adjoint differential equations. We outline the classical optimal control problem, discuss the direct and indirect optimization methods, and illustrate them with examples, including a fixed-rate royalty payment problem. After outlining each framework, we assess its strengths and weaknesses in terms of consistency and computational performance. Finally, we consider the synergy of direct and indirect methods and identify directions for further development, including their integration with more sophisticated tools such as machine learning. We conclude that combining direct and indirect optimization methods offers great potential for modernizing classical optimal control, challenging conventional techniques and pointing the way toward further advances in control system optimization.
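
For concreteness, the classical problem outlined above is commonly written in the following Bolza form; the notation below (state x, control u, cost terms \Phi and L) is standard textbook notation and is our assumption, not necessarily the paper's own symbols:

\[
\min_{u(\cdot)} \; J = \Phi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} L\bigl(x(t), u(t), t\bigr)\, dt
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \quad x(t_0) = x_0 .
\]

The indirect methods referenced above obtain the control law from the first-order optimality conditions of the Hamiltonian \(H = L + \lambda^{\top} f\), which yield the adjoint (costate) equation \(\dot{\lambda} = -\partial H / \partial x\) together with the stationarity condition \(\partial H / \partial u = 0\).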
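
To make the direct approach concrete, the following is a minimal sketch of direct transcription via trapezoidal collocation, one of the direct methods surveyed above. The double-integrator example and all names in it are illustrative assumptions, not taken from the paper: it minimizes the integral of u^2 while steering the state (position, velocity) from (0, 0) to (1, 0) over a unit time horizon.

import numpy as np
from scipy.optimize import minimize

N = 20                 # number of collocation intervals
h = 1.0 / N            # uniform step on t in [0, 1]
n_x = 2                # state dimension (position, velocity)

def unpack(z):
    """Split the decision vector into state and control trajectories."""
    x = z[: n_x * (N + 1)].reshape(N + 1, n_x)   # x[k] = (pos, vel) at node k
    u = z[n_x * (N + 1):]                        # u[k] = control at node k
    return x, u

def dynamics(x, u):
    """Double integrator: d(pos)/dt = vel, d(vel)/dt = u."""
    return np.column_stack([x[:, 1], u])

def objective(z):
    """Trapezoidal quadrature of the running cost u(t)^2."""
    _, u = unpack(z)
    return h * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))

def defects(z):
    """Trapezoidal collocation: x[k+1] - x[k] = h/2 * (f[k] + f[k+1])."""
    x, u = unpack(z)
    f = dynamics(x, u)
    return (x[1:] - x[:-1] - 0.5 * h * (f[1:] + f[:-1])).ravel()

def boundary(z):
    """Pin the initial state to (0, 0) and the final state to (1, 0)."""
    x, _ = unpack(z)
    return np.concatenate([x[0] - [0.0, 0.0], x[-1] - [1.0, 0.0]])

z0 = np.zeros(n_x * (N + 1) + (N + 1))   # trivial initial guess
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x_opt, u_opt = unpack(res.x)
print("cost:", res.fun)   # the analytic optimum for this problem is 12

Note how the sketch reflects the abstract's claim about direct methods: no control law is derived; the optimal trajectory emerges directly from a finite-dimensional nonlinear program over discretized states and controls.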
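
For comparison, here is a minimal sketch of the indirect shooting approach on the same illustrative double-integrator problem (again an assumption on our part, not an example drawn from the paper). Pontryagin's conditions give the costate dynamics and an explicit control law, and the method shoots on the unknown initial costates to satisfy the terminal state.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def augmented_dynamics(t, y):
    """State (pos, vel) and costates (lam1, lam2) under H = u^2 + lam1*vel + lam2*u."""
    pos, vel, lam1, lam2 = y
    u = -0.5 * lam2              # stationarity: dH/du = 2u + lam2 = 0
    return [vel, u, 0.0, -lam1]  # lam1' = -dH/dpos = 0, lam2' = -dH/dvel = -lam1

def terminal_error(lam0):
    """Miss distance at t = 1 as a function of the guessed initial costates."""
    sol = solve_ivp(augmented_dynamics, (0.0, 1.0),
                    [0.0, 0.0, lam0[0], lam0[1]], rtol=1e-9, atol=1e-9)
    pos_f, vel_f = sol.y[0, -1], sol.y[1, -1]
    return [pos_f - 1.0, vel_f - 0.0]    # target state (1, 0)

lam0 = fsolve(terminal_error, [1.0, 1.0])  # Newton-type shooting iteration
print("initial costates:", lam0)           # analytic values are (-24, -12)

This is the semi-analytical pattern described above: the adjoint differential equations supply the control law in closed form, and only a low-dimensional root-finding problem over the initial costates remains, which is precise when it converges but sensitive to the initial guess, motivating the synergy with direct methods discussed in the paper.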