State-of-the-art convolutional neural networks are designed to identify numerous object classes. Inference with such complex networks is resource-intensive, which prohibits their deployment on resource-constrained edge devices. In this context, we make two observations: first, the ability to classify an exhaustive list of categories exceeds the demands of most IoT applications; second, designing a custom CNN for each new IoT application is inefficient. These observations motivate us to ask whether one can use an existing optimized CNN to automatically construct a competitive CNN for a given IoT application whose objects of interest are a fraction of the categories that the original CNN was designed to classify, such that the model’s inference resource requirement scales down proportionally. We use the term <i>resource scalability</i> to refer to this concept, and develop a methodology for the automated synthesis of resource-scalable CNNs from an optimized baseline CNN. The synthesized CNN has sufficient learning capacity to handle the given IoT application’s requirements, and yields competitive accuracy. The proposed approach is fast and, unlike the presently common practice of neural network design, does not require iterative rounds of trial-and-error training to find an optimal architecture. Experimental results showcase the efficacy of the approach and highlight its complementary nature with respect to existing model compression techniques such as pruning and quantization.