This study presents a framework for classifying the poses of a wooden mannequin using a single-photon avalanche diode (SPAD) array under dynamic and heterogeneous fog conditions. The target and a fog generator are placed inside an enclosed fog chamber. Training datasets are collected continuously by configuring the temporal and spatial resolutions in the sensor's firmware; the sensor is a low-cost (under $5) module that integrates a SPAD array with a diffused VCSEL laser. An extreme learning machine (ELM) is trained for rapid pose classification and benchmarked against a convolutional neural network (CNN). We quantitatively justify the number of hidden-layer nodes chosen to balance computing speed and accuracy. Results demonstrate that the ELM accurately classifies mannequin poses obscured by dynamic heavy fog at distances up to 35 cm from the sensor, enabling real-time applications in consumer electronics. The proposed ELM achieves 90.65% training accuracy and 89.58% testing accuracy. We further demonstrate the robustness of both the ELM and the CNN as fog density increases. Finally, we discuss the sensor's current optical limitations and lay the groundwork for future advancements in sensor technology.
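For reference, the ELM described above corresponds to the standard formulation in which random, fixed hidden-layer weights are followed by a single closed-form solve for the output weights; the hidden-node count is the knob that trades accuracy against compute. The sketch below is a generic NumPy illustration of this idea, not the authors' implementation: the feature size, class count, node counts, and all variable names are illustrative assumptions.

```python
# Minimal sketch of a generic extreme learning machine (ELM) classifier.
# Assumes each sample is a flattened SPAD frame/histogram; feature size,
# class count, and hidden-node values are illustrative only.
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden, rng=None):
        self.n_hidden = n_hidden                  # hidden-node count: speed/accuracy trade-off
        self.rng = rng or np.random.default_rng(0)

    def _hidden(self, X):
        # Random, fixed input weights and biases; sigmoid activation.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y, n_classes):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(n_classes)[y]                  # one-hot targets
        H = self._hidden(X)
        # Output weights via the Moore-Penrose pseudo-inverse; this single
        # linear solve is what makes ELM training fast compared with a CNN.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Example: sweep hidden-node counts to observe the speed/accuracy trade-off
# on synthetic stand-in data (real inputs would be SPAD-derived features).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train, y_train = rng.normal(size=(200, 64)), rng.integers(0, 4, 200)
    X_test, y_test = rng.normal(size=(50, 64)), rng.integers(0, 4, 50)
    for n in (50, 200, 800):
        elm = ELMClassifier(n_hidden=n).fit(X_train, y_train, n_classes=4)
        acc = np.mean(elm.predict(X_test) == y_test)
        print(f"hidden nodes={n:4d}  test accuracy={acc:.2f}")
```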