Abstract
We present the first major release of the Classification Algorithms Comparison Pipeline (CACP). The software enables newly developed classification algorithms in Python to be compared with existing classifiers in order to evaluate classification performance and to ensure both the reproducibility and the statistical reliability of the results. CACP considerably simplifies and accelerates the entire classifier evaluation process and helps prepare professional documentation of the experiments conducted. This release enhances existing tools and adds new features: (1) support for River machine learning library datasets in incremental learning, (2) the ability to include user-defined datasets, (3) use of River classifiers for incremental learning, (4) use of River metrics for incremental learning, (5) the flexibility to create user-defined metrics, (6) record-by-record testing for incremental learning, (7) an enhanced summary of incremental testing results with dynamic visualization of the learning process, and (8) a Graphical User Interface (GUI).
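To illustrate the record-by-record (test-then-train) incremental evaluation with River classifiers and metrics mentioned above, the sketch below uses the River library directly. The dataset, classifier, and metric choices are illustrative assumptions only and do not represent CACP's own API; CACP automates and documents this kind of workflow across many classifiers and datasets.

    # Minimal sketch of record-by-record (test-then-train) incremental
    # evaluation with River. Dataset, classifier, and metric here are
    # assumptions for demonstration, not part of CACP's interface.
    from river import datasets, metrics, naive_bayes

    model = naive_bayes.GaussianNB()   # incremental classifier
    metric = metrics.Accuracy()        # incremental metric, updated per record

    for x, y in datasets.Phishing():   # stream records one at a time
        y_pred = model.predict_one(x)  # test on the incoming record first
        if y_pred is not None:         # no prediction before the first fit
            metric.update(y, y_pred)
        model.learn_one(x, y)          # then train on the same record

    print(metric)                      # final record-by-record accuracy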