Deep Neural Networks (DNNs) are extremely computationally demanding, which presents a significant barrier to their deployment, especially on resource-constrained devices. Significant work from both the machine learning and computing systems communities has attempted to accelerate DNNs. However, the number of available techniques and the domain knowledge required to explore them continue to grow, making design space exploration (DSE) increasingly difficult. To unify the perspectives of these two communities, this paper introduces the Deep Learning Acceleration Stack (DLAS), a conceptual model for DNN deployment and acceleration. We adopt a six-layer representation that organizes and illustrates the key areas of DNN acceleration, from machine learning to software and computer architecture. We argue that the DLAS model balances simplicity and expressiveness, helping practitioners from various domains tackle co-design acceleration challenges. We demonstrate the interdependence of the DLAS layers, and thus the need for co-design, through an across-stack perturbation study, using a modified tensor compiler to generate experiments for combinations of a few parameters from across the DLAS layers. Our perturbation study assesses the impact on inference time and accuracy of varying DLAS parameters across two datasets, seven popular DNN architectures, four compression techniques, three algorithmic primitives (with sparse and dense variants), untuned and auto-scheduled code generation, and four hardware platforms. The study observes significant changes in the relative performance of design choices when new DLAS parameters are introduced (e.g., the fastest algorithmic primitive varies with the level of quantization). Given the strong evidence for the need for co-design, and the high cost of DSE, DLAS offers a valuable conceptual model for exploring advanced co-designed, accelerated deep learning solutions.