This paper presents an in-depth performance analysis of Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) estimation based on features extracted from photoplethysmography (PPG) signals enhanced with the Maximal Overlap Discrete Wavelet Transform (MODWT), which are used to train several machine learning (ML) regression models, including eXtreme Gradient Boosting (XGBoost). The impact of different feature selection methods, ML methods, and training set sizes has been analyzed. A first result concerns the improvements achievable with features extracted from MODWT-enhanced PPG signals. The most significant features have been selected using three different algorithms, namely RReliefF, Minimum Redundancy Maximum Relevance (MRMR), and Correlation-based Feature Selection (CFS). This comparison has been critical in showing that exploiting the new features improves SBP and DBP estimation. Moreover, the authors have trained several ML algorithms to compare their accuracy and training time, showing the Pareto frontier. The RReliefF and MRMR selection algorithms, together with ML algorithms such as XGBoost, Gaussian Process Regression (GPR), and Ensemble methods, stood out for their performance, each offering a different trade-off between prediction error and training time. In addition, a further result has been obtained by varying the dataset size to assess its impact on the Root Mean Square Error (RMSE) of the best-performing models, yielding an empirical relationship between achievable RMSE and training set size. From this relationship, an upper bound on the training set size, beyond which no further RMSE improvement is expected, has been extrapolated.
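As a rough illustration of the kind of pipeline described above, the following minimal Python sketch (not the authors' implementation) enhances a PPG segment with the stationary wavelet transform from PyWavelets (equivalent to the MODWT up to normalization), ranks features with mutual information as a simple stand-in for the RReliefF/MRMR selectors named in the paper, and fits an XGBoost regressor to SBP targets. The feature matrix, target values, and the `enhance_ppg` helper are hypothetical placeholders.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_regression
from xgboost import XGBRegressor

def enhance_ppg(ppg, wavelet="sym4", level=4):
    """Denoise a PPG segment by zeroing the finest-scale SWT detail band."""
    # pywt.swt returns [(cA_n, cD_n), ..., (cA_1, cD_1)]; the last pair
    # holds the finest (highest-frequency, noise-dominated) details.
    coeffs = pywt.swt(ppg, wavelet, level=level)
    coeffs[-1] = (coeffs[-1][0], np.zeros_like(coeffs[-1][1]))
    return pywt.iswt(coeffs, wavelet)

rng = np.random.default_rng(0)

# Synthetic PPG-like segment; length must be divisible by 2**level for swt.
ppg = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * rng.normal(size=1024)
clean = enhance_ppg(ppg)

# Placeholder data: X = morphological features from enhanced PPG beats,
# y = reference SBP values in mmHg (both randomly generated here).
X, y = rng.normal(size=(512, 40)), rng.normal(120.0, 10.0, size=512)

# Rank features by mutual information and keep the top k
# (a stand-in for the RReliefF / MRMR / CFS selection in the paper).
k = 15
scores = mutual_info_regression(X, y, random_state=0)
top = np.argsort(scores)[-k:]

# Fit an XGBoost regressor on the selected features and report training RMSE.
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:, top], y)
rmse = float(np.sqrt(np.mean((model.predict(X[:, top]) - y) ** 2)))
print(f"training RMSE: {rmse:.2f} mmHg")
```

In practice, RMSE would be evaluated on a held-out test set, and the same fit/evaluate loop would be repeated over increasing training set sizes to trace the RMSE-versus-size relationship discussed above.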