Abstract

Industries that sell products with short-term or seasonal life cycles must regularly introduce new products. Forecasting the demand for New Product Introductions (NPI) can be challenging due to the fluctuations of many factors such as trend, seasonality, or other external and unpredictable phenomena (e.g., the COVID-19 pandemic). Traditionally, NPI is an expert-centric process. This paper presents a study on automating the forecast of NPI demand using statistical Machine Learning (namely, Gradient Boosting and XGBoost). We show how to overcome shortcomings of the traditional data preparation that underpins the manual process. Moreover, we illustrate the role of cross-validation techniques in hyper-parameter tuning and model validation. Finally, we provide empirical evidence that statistical Machine Learning can forecast NPI demand better than experts.

Highlights

  • In several industries (e.g., Fashion), the period in which the products are saleable is likely to be short and seasonal

  • Gradient Boosting [5] refers to a class of ensemble Machine Learning (ML) algorithms that can be used for regression predictive modeling problems

  • It is worth noting that the validation Mean Absolute Percentage Error (MAPE) reported for the experts’ approach differs across comparisons, because the validation set was generated separately for each proposed approach
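The highlights evaluate forecasts with MAPE. As a reminder of what that metric computes, here is a minimal sketch (the function name and NumPy implementation are illustrative, not taken from the paper):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, expressed in percent.

    Assumes y_true contains no zeros (the ratio is undefined there).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

# Example: two products, each off by 10% of its true demand.
mape([100, 200], [110, 180])  # -> 10.0
```

Because the error is relative to each true value, MAPE weights a miss on a low-demand product as heavily as the same percentage miss on a high-demand one, which is why the validation set used to compute it matters.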


Introduction

In several industries (e.g., Fashion), the period in which products are saleable is likely to be short and seasonal. Demand for these products is rarely stable or linear. It may be influenced by the fluctuations of many factors such as weather conditions, holidays, marketing strategy, fashion trends, films, or even celebrities and footballers. These factors make it challenging to forecast the demand for New Product Introductions (NPI). In Gradient Boosting, models are fit using an arbitrary differentiable loss function and a gradient descent optimization algorithm. This gives the technique its name, "gradient boosting," as the loss gradient is minimized as the model is fit, much like in a Neural Network in Deep Learning. To perform the gradient descent procedure, we add a tree to the model that reduces the loss (i.e., follows the gradient). The output of the new tree is added to the output of the existing sequence of trees to correct or improve the model's final output.
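The residual-fitting loop described above can be sketched as follows. This is a toy illustration under simplifying assumptions: squared-error loss (whose negative gradient is simply the residual), one-dimensional inputs, and depth-one decision stumps instead of the full regression trees used by libraries such as XGBoost. None of these names or choices come from the paper.

```python
import numpy as np

def fit_stump(x, residuals):
    """Fit a depth-1 regression tree (stump) to the current residuals."""
    best = None
    for t in np.unique(x)[:-1]:  # candidate split thresholds
        left, right = residuals[x <= t], residuals[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, left_val, right_val = best
    return lambda xs: np.where(xs <= t, left_val, right_val)

def gradient_boost(x, y, n_rounds=100, lr=0.1):
    """Sequentially add stumps, each fit to the negative loss gradient."""
    base = y.mean()                      # initial constant prediction
    pred = np.full_like(y, base, dtype=float)
    stumps = []
    for _ in range(n_rounds):
        residuals = y - pred             # negative gradient of squared loss
        stump = fit_stump(x, residuals)
        pred += lr * stump(x)            # shrunken step along the gradient
        stumps.append(stump)

    def model(xs):
        out = np.full_like(xs, base, dtype=float)
        for s in stumps:
            out += lr * s(xs)
        return out
    return model
```

Each round fits a stump to the residuals (the negative gradient of the squared loss) and adds a shrunken copy of its output to the ensemble, so the training loss decreases step by step, mirroring gradient descent in function space.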

