This paper addresses how physical knowledge can improve machine learning in process control. A data-driven tracking control framework using physics-informed neural networks (PINNs) and deep reinforcement learning (DRL) is proposed for dynamical systems, which is particularly important when repeated data-collection experiments are limited or not permitted. An indirect control approach is followed to accomplish this. First, a new process model is identified using PINNs trained from a model representing the experimental data collected from the plant. Then, a DRL-based control agent uses the identified PINN model to learn a control policy offline. We evaluate the framework on data-driven tracking control of a nonisothermal continuous stirred-tank reactor (CSTR), considering measurement noise and stochastic dynamics in closed-loop operation. Our approach performs comparably to a nonlinear model predictive controller (NMPC) without requiring a model to predict the process dynamics.
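To make the two-step pipeline concrete, the following is a minimal sketch, assuming a PyTorch implementation. The CSTR parameter values, network sizes, and helper names (`cstr_rhs`, `PINNSurrogate`, `Policy`) are illustrative placeholders, not the authors' code; the simplified policy update shown here, which backpropagates a tracking cost through the differentiable surrogate, stands in for the DRL agent used in the paper.

```python
import torch
import torch.nn as nn

# --- Illustrative nonisothermal CSTR right-hand side (placeholder parameters) ---
def cstr_rhs(x, u):
    """x = (C_A, T), u = coolant temperature; returns dx/dt under assumed dynamics."""
    C_A, T = x[..., 0], x[..., 1]
    q_V, C_Af, T_f = 1.0, 1.0, 350.0                      # illustrative values
    k0, E_R, dHrC, UA_VrC = 7.2e10, 8750.0, 209.0, 0.05   # illustrative values
    r = k0 * torch.exp(-E_R / T) * C_A
    dCA = q_V * (C_Af - C_A) - r
    dT = q_V * (T_f - T) + dHrC * r + UA_VrC * (u.squeeze(-1) - T)
    return torch.stack([dCA, dT], dim=-1)

# --- Step 1: identify a PINN surrogate of the one-step process dynamics ---
class PINNSurrogate(nn.Module):
    def __init__(self, dt=0.02):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(),
                                 nn.Linear(64, 2))
        self.dt = dt

    def forward(self, x, u):
        # Forward-Euler-style next-state prediction from (state, input)
        return x + self.dt * self.net(torch.cat([x, u], dim=-1))

def train_pinn(model, x_data, u_data, x_next_data, x_col, u_col, epochs=2000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        # Data loss on measured plant transitions
        loss_data = ((model(x_data, u_data) - x_next_data) ** 2).mean()
        # Physics residual at collocation points: predicted rate vs. CSTR ODE
        x_pred = model(x_col, u_col)
        residual = (x_pred - x_col) / model.dt - cstr_rhs(x_col, u_col)
        loss_phys = (residual ** 2).mean()
        (loss_data + loss_phys).backward()
        opt.step()
    return model

# --- Step 2: learn a tracking policy offline on the identified surrogate ---
class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, x, ref):
        return self.net(torch.cat([x, ref], dim=-1))

def train_policy(policy, surrogate, x0, ref, horizon=50, epochs=500):
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        x, cost = x0, 0.0
        for _ in range(horizon):
            u = 300.0 + 20.0 * torch.tanh(policy(x, ref))   # bounded input (illustrative)
            x = surrogate(x, u)
            cost = cost + ((x[..., :1] - ref) ** 2).mean()  # concentration setpoint tracking
        cost.backward()
        opt.step()
    return policy
```

In the paper's framework an off-the-shelf DRL algorithm would interact with the PINN surrogate as an offline simulator instead of the unrolled-gradient update above; the sketch is meant only to show how the identified model decouples policy learning from further plant experiments.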