In machine learning, the choice of loss function plays a pivotal role in a model's training dynamics and generalization capability. Traditional loss functions such as mean squared error (MSE) and cross-entropy are often ill-equipped to handle noisy and anomalous data, which can yield models that perform poorly in practice. To address these shortcomings, this study introduces a novel adaptive robust loss function with a tunable parameter that adjusts the function's robustness to the nature of the data being processed. Our experiments demonstrate that this loss function significantly improves the performance of linear regression and multilayer perceptron models, particularly in environments laden with noise and outliers. By tuning the parameter, the function can accommodate varying levels of data irregularity, enhancing the models' accuracy and reliability across diverse and complex data environments. This adaptive mechanism offers both a theoretical contribution to the understanding of robust loss functions and a practical tool for machine learning practitioners to build models that are resilient to data imperfections, suggesting a shift toward more adaptive and robust approaches in model development.
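The abstract does not specify the functional form of the proposed loss. As a purely illustrative sketch, the following implements one well-known family of robust losses with a single tunable robustness parameter (the general loss of Barron, 2019), which interpolates between squared error and heavy-tailed, outlier-resistant behavior; the parameter names `alpha` and `c` are assumptions for this example, not the paper's notation.

```python
import numpy as np

def adaptive_robust_loss(residual, alpha=1.0, c=1.0):
    """Illustrative tunable robust loss (Barron's general loss family).

    alpha = 2 recovers scaled squared error; alpha = 1 approximates a
    smooth L1 (Charbonnier) loss; alpha <= 0 down-weights large
    residuals even more aggressively. `c` is the scale beyond which
    residuals start being treated as potential outliers.
    """
    x = np.asarray(residual, dtype=float) / c
    if alpha == 2.0:                      # squared-error special case
        return 0.5 * x**2
    if alpha == 0.0:                      # Cauchy/Lorentzian special case
        return np.log1p(0.5 * x**2)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((x**2 / b + 1.0) ** (alpha / 2.0) - 1.0)
```

For a residual of 10 with `c = 1`, squared error (`alpha = 2`) gives a penalty of 50, while `alpha = 1` gives roughly 9, so a single large outlier dominates training far less; this is the kind of trade-off a tunable robustness parameter controls.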