Despite the widespread adoption of deep learning for image classification, significant obstacles remain. First, multisource data with diverse sizes and formats poses a major challenge for most current deep learning models. Second, the scarcity of manually labeled data for model training limits the applicability of deep learning. Third, the widely used CNN-based methods show limitations in extracting global features and perform poorly on image topology. To address these issues, we propose a Hybrid Feature Fusion Deep Learning (HFFDL) framework for image classification. The framework consists of an automated image segmentation module, a two-stream backbone module, and a classification module. The automated image segmentation module uses the U-Net model with transfer learning to detect regions of interest (ROIs) in multisource images; the two-stream backbone module integrates the Swin Transformer architecture with an Inception CNN to simultaneously extract local and global features for efficient representation learning. We evaluate the performance of the HFFDL framework on two publicly available image datasets: one for identifying COVID-19 from chest X-ray scans (30,386 images), and another for multiclass skin cancer screening using dermoscopy images (25,331 images). The HFFDL framework outperformed many state-of-the-art models, achieving AUC scores of 0.9835 and 0.8789, respectively. Furthermore, in a practical application study conducted in a hospital, identifying viable embryos from medical images, the HFFDL framework outperformed embryologists.
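To make the two-stream backbone concrete, the following is a minimal sketch of the fusion idea described above: a Swin Transformer branch supplying global features and an Inception-style CNN branch supplying local features, with their pooled embeddings concatenated before a classification head. The specific model variants (`swin_tiny_patch4_window7_224`, `inception_v3` from the `timm` library) and fusion by simple concatenation are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a two-stream backbone: Swin Transformer (global
# features) + Inception CNN (local features), fused by concatenation.
# Model choices and the fusion scheme are assumptions for illustration.
import torch
import torch.nn as nn
import timm  # assumed available; provides pretrained Swin and Inception models


class TwoStreamBackbone(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # num_classes=0 tells timm to return pooled feature vectors.
        # Set pretrained=True to load ImageNet weights (transfer learning).
        self.global_stream = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=False, num_classes=0
        )
        self.local_stream = timm.create_model(
            "inception_v3", pretrained=False, num_classes=0
        )
        fused_dim = self.global_stream.num_features + self.local_stream.num_features
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, x_swin: torch.Tensor, x_cnn: torch.Tensor) -> torch.Tensor:
        # The streams expect different input resolutions by default
        # (224x224 for this Swin variant, 299x299 for Inception v3).
        g = self.global_stream(x_swin)  # (B, C_g) global features
        l = self.local_stream(x_cnn)    # (B, C_l) local features
        return self.classifier(torch.cat([g, l], dim=1))


# Usage: a binary screening head (e.g., COVID-19 vs. normal) on dummy inputs.
model = TwoStreamBackbone(num_classes=2).eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 299, 299))
print(logits.shape)  # torch.Size([1, 2])
```

Concatenating pooled embeddings is the simplest fusion strategy; in practice the classification head could equally operate on attention-weighted or element-wise combinations of the two streams.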