This study explores the integration of quantum data embedding techniques into classical machine learning (ML) algorithms to assess performance enhancements and computational implications across a spectrum of models. We examined various classical-to-quantum mapping methods, ranging from basis encoding and angle encoding to amplitude encoding, for representing classical data as quantum states. We conducted an extensive empirical study encompassing popular ML algorithms, including Logistic Regression, K-Nearest Neighbors, Support Vector Machines, and ensemble methods such as Random Forest, LightGBM, AdaBoost, and CatBoost. Our findings reveal that quantum data embedding contributes to improved classification accuracy and F1 scores, with gains particularly notable in models that inherently benefit from enhanced feature representation. We also observed nuanced effects on running time: low-complexity models exhibited moderate increases, while more computationally intensive models experienced discernible changes. Notably, ensemble methods demonstrated a favorable balance between performance gains and computational overhead. This study underscores the potential of quantum data embedding to enhance classical ML models and emphasizes the importance of weighing performance improvements against computational costs. Future research may involve refining quantum encoding processes to optimize computational efficiency and exploring scalability for real-world applications. Our work contributes to the growing body of knowledge at the intersection of quantum computing and classical machine learning, offering insights for researchers and practitioners seeking to harness quantum-inspired techniques in practical scenarios.
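To make the encoding schemes concrete, the sketch below is a hypothetical illustration (not code from the study) of angle encoding, assuming the common convention in which each classical feature x_i is mapped to an RY(x_i) rotation on its own qubit, so a single feature yields the state cos(x_i/2)|0⟩ + sin(x_i/2)|1⟩ and the full register is the tensor product of these states:

```python
import numpy as np

def angle_encode(features):
    """Angle-encode a classical feature vector into a simulated statevector.

    Each feature x is treated as an RY(x) rotation applied to |0> on its
    own qubit, producing the single-qubit state [cos(x/2), sin(x/2)];
    the full n-qubit state is the Kronecker (tensor) product of these.
    """
    state = np.array([1.0])  # empty register (scalar amplitude)
    for x in features:
        qubit = np.array([np.cos(x / 2.0), np.sin(x / 2.0)])
        state = np.kron(state, qubit)  # append this qubit to the register
    return state

# Encoding 2 features yields a normalized 4-amplitude statevector.
vec = angle_encode([np.pi / 2, 0.0])
print(np.round(vec, 4))
```

Basis encoding would instead map a bitstring directly to a computational basis state, and amplitude encoding would normalize the feature vector itself into the 2^n amplitudes; angle encoding sits between the two, using one qubit per feature.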