Neural networks are a form of machine learning that can be trained to estimate relationships between variables in complex physical processes. They are particularly adept at estimating relationships among variables that lie within the range of values on which they were trained, but their performance often degrades when they must estimate the behavior of processes outside the region of input space covered by the training set. For physical systems, the possible relationships between input and output variables are constrained: dimensional variables can be replaced by a smaller number of dimensionless parameters that enforce these physical constraints. This is accomplished through dimensional analysis and the Buckingham Pi theorem, which can also be used to enforce or test for dynamic similitude between systems operating at different scales. The process can be exploited for two purposes. The first is to reduce the number of variables a neural network must correlate. The second is to allow a dimensionless neural network to act as an interpolator between dimensionless input and output parameters, even when a network trained on the corresponding dimensional data would be forced to extrapolate because the dimensional inputs lie outside the training set. When dynamic similitude between systems has been achieved, fitting an input-output relationship with dimensionless data therefore generalizes better than fitting it with dimensional data. Examples are presented that demonstrate that the proposed process enables accurate modeling of the behavior of physically similar systems operating at different scales.
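As a concrete illustration of the variable-reduction step, consider laminar pipe flow: the dimensional inputs (fluid density, velocity, pipe diameter, viscosity) collapse to a single dimensionless group, the Reynolds number, and the dimensionless output (the Darcy friction factor) obeys the known relation f = 64/Re. The sketch below is a hypothetical example, not one of the paper's case studies; it verifies that two geometrically similar systems at different scales with matched Re share the same dimensionless output, which is the dynamic-similitude property a dimensionless neural network exploits:

```python
# Buckingham Pi illustration: four dimensional inputs (rho, V, D, mu)
# reduce to one dimensionless group, the Reynolds number Re = rho*V*D/mu.
# For laminar pipe flow the Darcy friction factor obeys f = 64/Re, so any
# two systems with equal Re share the same f regardless of scale.

def reynolds(rho, V, D, mu):
    """Dimensionless Pi group for pipe flow."""
    return rho * V * D / mu

def friction_factor(Re):
    """Laminar Darcy friction factor (valid for Re below ~2300)."""
    return 64.0 / Re

# Small lab-scale pipe with a water-like fluid.
Re_small = reynolds(rho=1000.0, V=0.01, D=0.005, mu=1.0e-3)   # Re = 50

# Geometrically similar pipe at 10x the diameter and 1/10 the velocity:
# the dimensional inputs differ, but the Pi group is identical.
Re_large = reynolds(rho=1000.0, V=0.001, D=0.05, mu=1.0e-3)   # Re = 50

assert abs(Re_small - Re_large) < 1e-12
# Matched Pi groups imply identical dimensionless output (dynamic similitude),
# so a model trained on (Re, f) pairs interpolates where a model trained on
# (rho, V, D, mu, f) would have to extrapolate across scales.
print(friction_factor(Re_small), friction_factor(Re_large))   # 1.28 1.28
```

A network trained on the small-scale data in dimensionless form would see the large-scale case as a point inside its training range, whereas in dimensional form the large-scale diameter and velocity lie outside it.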