Incorporating machine learning, specifically deep learning, into speech enhancement provides a data-driven means of recovering clean speech from distorted signals. The approach described here uses a discrete transform based on Charlier polynomials, the discrete Charlier transform (DCHT), to extract spectral features from noisy speech; a fully connected neural network then learns the nonlinear mapping from these noisy features to their clean counterparts. By exploiting deep learning's capacity for nonlinear mapping, the network captures contextual information in the speech signal and produces enhanced speech with improved quality and intelligibility. The proposed algorithm is evaluated empirically through self-comparison, tuning the DCHT parameter to optimize enhancement performance: models are trained and tested on the TIMIT database across a range of DCHT parameter values and assessed with diverse objective speech measures. The results show that the DCHT-based trained model enhances speech signals effectively under specific conditions.
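The Charlier polynomials are discrete orthogonal polynomials with a single free parameter (the Poisson weight parameter, often written a), which is the quantity tuned in the experiments above. As a minimal sketch of how DCHT spectra could be extracted from a signal frame, the following builds an (approximately) orthonormal Charlier basis from the standard three-term recurrence and projects a frame onto it; the frame length, transform order, and parameter value are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def charlier_basis(N, a, order):
    """(Approximately) orthonormal Charlier basis on x = 0..N-1.

    Rows are the normalized Charlier functions
        phi_n(x) = C_n(x; a) * sqrt(Pois(x; a) * a**n / n!),
    built from the standard three-term recurrence
        C_{n+1}(x) = ((a + n - x) * C_n(x) - n * C_{n-1}(x)) / a.
    Orthonormality is exact on x = 0..infinity; on a finite grid it
    holds to high accuracy when a << N (the Poisson weight has decayed).
    """
    x = np.arange(N, dtype=float)
    # log(x!) computed cumulatively to avoid overflow in a**x / x!
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1.0, N)))))
    log_w = -a + x * np.log(a) - log_fact          # log of Poisson pmf
    sqrt_w = np.exp(0.5 * log_w)

    C = np.zeros((order, N))
    C[0] = 1.0
    if order > 1:
        C[1] = 1.0 - x / a
    for n in range(1, order - 1):
        C[n + 1] = ((a + n - x) * C[n] - n * C[n - 1]) / a

    # per-order normalization sqrt(a**n / n!), also in log form
    n = np.arange(order, dtype=float)
    log_fact_n = np.concatenate(([0.0],
                                 np.cumsum(np.log(np.arange(1.0, order)))))
    scale = np.exp(0.5 * (n * np.log(a) - log_fact_n))
    return C * sqrt_w * scale[:, None]

def dcht(frame, a=8.0, order=16):
    """DCHT spectrum of one frame: project onto the Charlier basis.

    a=8.0 and order=16 are illustrative defaults, not the tuned values.
    """
    Phi = charlier_basis(len(frame), a, order)
    return Phi @ frame
```

In a pipeline like the one described, `dcht` would be applied framewise to noisy speech, the resulting spectra fed to the fully connected network, and the network's output mapped back through the (transposed) basis to reconstruct the enhanced waveform; sweeping the parameter `a` corresponds to the self-comparison experiments.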