Monitoring vital signs is essential for tracking patient health and detecting changes in a patient's condition. However, in aging societies with overburdened healthcare staff, monitoring vital signs accurately and efficiently poses a challenge. To address this issue, an autonomous system for vital sign monitoring is proposed, offering improved accuracy, real-time monitoring, alerting, remote monitoring, and reduced staff labor costs. This paper presents a deep learning architecture that classifies vital signs using a publicly accessible dataset of 25,494 patients with five numerical features. A CNN-LSTM model is introduced that outperforms a conventional CNN baseline in accuracy, parameter efficiency, and training time. The CNN-LSTM model captures both spatial and temporal features from the input data, yielding a richer representation and higher accuracy than the CNN model, which extracts only spatial features. The proposed model achieved 98% accuracy, surpassing previous models. The findings demonstrate the potential of the CNN-LSTM model for early identification of medical issues, enabling prompt intervention and improved patient outcomes. Overall, this research highlights the value of deploying an autonomous vital sign monitoring system in healthcare organizations, with substantial benefits for patient care and healthcare management.
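The abstract does not specify layer sizes or hyperparameters, so the following is only an illustrative sketch of the CNN-LSTM idea it describes: a 1D convolution first extracts local (spatial) patterns across a window of vital-sign readings, and an LSTM then summarizes the temporal evolution of those patterns before a classification head. All dimensions (window length, channel counts, two output classes) are hypothetical, and the forward pass uses random, untrained weights purely to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    # x: (T, C_in), w: (K, C_in, C_out), b: (C_out,) -- valid convolution + ReLU
    K, _, C_out = w.shape
    T_out = x.shape[0] - K + 1
    out = np.zeros((T_out, C_out))
    for t in range(T_out):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_hidden(x, Wx, Wh, b, H):
    # Standard LSTM recurrence; gates packed in order [input, forget, cell, output].
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b
        i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h  # temporal summary of the conv features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical shapes: a window of 8 readings of the 5 vital-sign features.
T, F, C, H, CLASSES = 8, 5, 16, 32, 2
x = rng.standard_normal((T, F))

w_conv = rng.standard_normal((3, F, C)) * 0.1   # kernel size 3
b_conv = np.zeros(C)
Wx = rng.standard_normal((C, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b_lstm = np.zeros(4 * H)
W_out = rng.standard_normal((H, CLASSES)) * 0.1
b_out = np.zeros(CLASSES)

feat = conv1d_relu(x, w_conv, b_conv)             # spatial features per time step
h = lstm_last_hidden(feat, Wx, Wh, b_lstm, H)     # temporal features
probs = softmax(h @ W_out + b_out)                # class probabilities
print(probs)
```

In a real system the weights would be trained end to end (e.g. by backpropagation with cross-entropy loss); the sketch only shows why the combined model sees both kinds of structure that a CNN alone would miss.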