X-ray scatter causes considerable degradation of cone-beam computed tomography (CBCT) image quality. Deep learning-based methods have been shown to be effective for scatter estimation. Modern CBCT systems can scan a wide range of field-of-measurement (FOM) sizes, and variations in FOM size cause a major shift in the scatter-to-primary ratio. However, the scatter estimation performance of deep learning networks has not been extensively evaluated under varying FOMs. Therefore, we trained state-of-the-art scatter estimation networks for varying FOMs and developed a method that exploits FOM size information to improve performance. We provided the FOM size as additional input by converting it into two feature channels and concatenating them to the encoder of each network. We compared our approach on U-Net, Spline-Net, and DSE-Net by training each network with and without the FOM information. We used a Monte Carlo-simulated dataset to train the networks on 18 FOM sizes and to test them on 30 unseen FOM sizes. In addition, we evaluated the models on water phantoms and real clinical CBCT scans. In the simulation study, our method reduced the average mean absolute percentage error of scatter estimation in the 2D projection domain by 38% for U-Net, 40% for Spline-Net, and 33% for DSE-Net. The root-mean-square error on the 3D reconstructed volumes improved by 43% for U-Net, 30% for Spline-Net, and 23% for DSE-Net. Our method also improved contrast and image quality on the real datasets, i.e., the water phantom and clinical data. Providing additional information about FOM size thus improves the robustness of neural networks for scatter estimation. The approach is not limited to FOM size; further variables such as tube voltage, scanning geometry, and patient size can be added to improve the robustness of a single network.
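To make the FOM-conditioning idea concrete, the following is a minimal sketch of how scalar FOM size descriptors could be broadcast into two constant-valued channels and concatenated to an encoder feature map. It assumes a PyTorch-style 2D encoder; the names fom_to_channels and FOMConditionedBlock, the choice of descriptors, and the layer sizes are illustrative assumptions, not the networks described above.

```python
import torch
import torch.nn as nn

def fom_to_channels(fom_size, height, width):
    """Broadcast two scalar FOM descriptors (e.g., normalized transaxial and
    axial extents; an assumption for illustration) into two constant-valued
    maps matching the projection size."""
    fom = torch.as_tensor(fom_size, dtype=torch.float32)        # shape: (2,)
    return fom.view(1, 2, 1, 1).expand(1, 2, height, width)     # shape: (1, 2, H, W)

class FOMConditionedBlock(nn.Module):
    """One encoder stage that concatenates the two FOM channels to its
    image-feature input before convolving (hypothetical layer sizes)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels + 2, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, fom_maps):
        # Resize the FOM maps to the current feature resolution, match the
        # batch size, and concatenate along the channel dimension.
        fom_resized = nn.functional.interpolate(fom_maps, size=x.shape[-2:])
        fom_resized = fom_resized.expand(x.shape[0], -1, -1, -1)
        return self.conv(torch.cat([x, fom_resized], dim=1))

# Usage sketch: a single 256x256 projection with a normalized FOM of (0.8, 0.5).
proj = torch.randn(1, 1, 256, 256)
fom_maps = fom_to_channels((0.8, 0.5), 256, 256)
block = FOMConditionedBlock(in_channels=1, out_channels=32)
features = block(proj, fom_maps)   # -> (1, 32, 256, 256)
```

Because the two channels are constant over the spatial dimensions, the same conditioning can be injected at any encoder depth by resizing, which is why the sketch resizes the maps inside the block rather than fixing them to the input resolution.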