Abstract
The quality of the air we breathe during the course of our daily lives has a significant impact on our health and well-being. Unfortunately, personal air quality measurement remains challenging. In this study, we investigate the use of first-person photos for predicting air quality. The main idea is to combine a generalized stacking approach with haze features extracted from first-person images to build an efficient new stacking model, called AirStackNet, for air pollution prediction. AirStackNet consists of two layers and four regression models: the first layer generates meta-data from Light Gradient Boosting Machine (LightGBM), Extreme Gradient Boosting Regression (XGBoost), and CatBoost Regression (CatBoost), while the second layer computes the final prediction from the meta-data of the first layer using Extra Trees Regression (ET). The performance of the proposed AirStackNet model is validated on the public Personal Air Quality Dataset (PAQD). Our experiments are evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Coefficient of Determination (R2), Mean Squared Error (MSE), Root Mean Squared Logarithmic Error (RMSLE), and Mean Absolute Percentage Error (MAPE). Experimental results indicate that the proposed AirStackNet model not only effectively improves air pollution prediction performance by mitigating the bias-variance tradeoff, but also outperforms baseline and state-of-the-art models.
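To make the two-layer stacking idea concrete, the following is a minimal sketch of such an ensemble. It is not the authors' implementation: it assumes scikit-learn's StackingRegressor as the stacking mechanism, default hyperparameters, and a placeholder feature matrix standing in for the haze features extracted from first-person images; the actual AirStackNet feature extraction and tuning are described in the paper itself, not here.

```python
# Minimal sketch of a two-layer stacking regressor in the spirit of AirStackNet.
# Assumptions (not from the abstract): scikit-learn's StackingRegressor,
# default hyperparameters, and synthetic data in place of haze features.
import numpy as np
from sklearn.ensemble import StackingRegressor, ExtraTreesRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from catboost import CatBoostRegressor

# Layer 1: base regressors whose out-of-fold predictions form the meta-data.
base_learners = [
    ("lgbm", LGBMRegressor()),
    ("xgb", XGBRegressor()),
    ("cat", CatBoostRegressor(verbose=0)),
]

# Layer 2: Extra Trees combines the meta-data into the final prediction.
stack = StackingRegressor(
    estimators=base_learners,
    final_estimator=ExtraTreesRegressor(),
    cv=5,  # out-of-fold predictions limit leakage between the two layers
)

# Hypothetical shapes; replace with real haze features and air quality labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.normal(size=200)

stack.fit(X, y)
pred = stack.predict(X)
print("MAE :", mean_absolute_error(y, pred))
print("RMSE:", mean_squared_error(y, pred) ** 0.5)
print("R2  :", r2_score(y, pred))
```

Cross-validated stacking of this kind is one common way to trade off the bias of individual boosted models against the variance of the combined predictor, which is the motivation the abstract gives for the two-layer design.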