Steerable cameras that can be controlled over a network to retrieve telemetry of interest have become popular. In this paper, we develop a framework called AcTrak to automate a camera's motion so that it appropriately switches between (a) zooming in on existing targets in a scene to track their activities, and (b) zooming out to search for new targets arriving in the area of interest. Specifically, we seek a good trade-off between the two tasks, i.e., we want to ensure that new targets are observed by the camera before they leave the scene, while also zooming in on existing targets frequently enough to monitor their activities. Prior control algorithms exist for steering cameras to optimize certain objectives; however, to the best of our knowledge, none has considered this problem, and they do not perform well when target activity tracking is required. AcTrak automatically controls the camera's PTZ configurations using reinforcement learning (RL) to select the best camera position given the current state. Via simulations using real datasets, we show that AcTrak detects newly arriving targets 30% faster than a non-adaptive baseline and rarely misses targets, unlike the baseline, which can miss up to 5% of the targets. We also implement AcTrak to control a real camera and demonstrate that, in comparison with the baseline, it acquires about 2x more high-resolution images of targets.
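To make the control loop described above concrete, the following is a minimal sketch (not the authors' implementation) of an RL-style controller that chooses between a zoom-out "search" action and zoom-in "track" actions from a coarse state. The abstract does not specify the RL algorithm, state representation, or reward, so every name and parameter here (e.g., `PTZAgent`, the tabular Q-learning update, the staleness buckets) is an assumption for illustration only.

```python
# Illustrative sketch only -- not the AcTrak implementation.
# A tabular epsilon-greedy controller picks a discrete PTZ action
# (zoom out to search, or zoom in on one of K tracked targets)
# from a coarse state. All names and parameters are assumptions.
import random
from collections import defaultdict

K = 3                      # assumed number of simultaneously tracked targets
ACTIONS = ["zoom_out"] + [f"zoom_in_{i}" for i in range(K)]

def discretize_state(num_new_targets, staleness):
    """Coarse state: count of unobserved new targets, plus how stale
    each tracked target's last zoom-in is (bucketed)."""
    return (min(num_new_targets, 3), tuple(min(s // 5, 3) for s in staleness))

class PTZAgent:
    def __init__(self, epsilon=0.1, alpha=0.2, gamma=0.95):
        self.q = defaultdict(float)      # Q[(state, action)] -> value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        # Epsilon-greedy choice over the discrete PTZ actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Toy usage: reward reflects detecting new targets and refreshing zoom-ins.
agent = PTZAgent()
s = discretize_state(num_new_targets=1, staleness=[0, 7, 12])
a = agent.act(s)
s_next = discretize_state(0, [0, 0, 13]) if a == "zoom_in_1" else s
agent.update(s, a, reward=1.0 if a != "zoom_out" else 0.5, next_state=s_next)
print(a)
```

A tabular agent is used here only to keep the sketch compact; the actual state space, reward design, and learning method used by AcTrak are described in the body of the paper, not in this abstract.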