While speech analysis holds promise for mental health assessment, research often focuses on single symptoms despite symptom co-occurrence and interaction. In addition, predictive models in mental health rarely assess the limitations of speech-based systems that matter for safe clinical deployment, such as uncertainty and fairness. We investigated the predictive potential of mobile-collected speech data for detecting and estimating depression, anxiety, fatigue, and insomnia in the general population, focusing on factors beyond mere accuracy. We included n=865 healthy adults and recorded their answers regarding their perceived mental and sleep states, asking how they felt and whether they had slept well lately. Clinically validated questionnaires measuring depression, anxiety, insomnia, and fatigue severity were also administered. We developed a novel, fully automated speech and machine learning pipeline involving voice activity detection, feature extraction, and model training to capture speech variability. We then modelled speech with deep learning models pretrained on a large open database and selected the best one on the validation set. Based on the best speech modelling approach, we evaluated clinical threshold detection, individual score prediction, model uncertainty estimation, and performance fairness across demographics (age, sex, education). We employed a train-validation-test split for all evaluations: to develop our models, select the best ones, and assess generalizability on held-out data. The best model was WhisperM with max pooling and oversampling. Our methods achieved good detection performance for all symptoms: depression (PHQ-9: AUC=0.76, F1=0.49; BDI: AUC=0.78, F1=0.65), anxiety (GAD-7: AUC=0.77, F1=0.50), insomnia (AIS: AUC=0.73, F1=0.62), and fatigue (MFI total score: AUC=0.68, F1=0.88). These performances were maintained for depression detection with the BDI and for fatigue when abstaining on uncertain cases (risk-coverage AUCs < 0.4). Individual symptom scores were predicted with good accuracy (all Pearson correlations were significant, with strengths between 0.31 and 0.49). Fairness analysis revealed that models were consistent across sex (average disparity ratio, DR = 0.86), less so across education levels (average DR = 0.47), and worst across age groups (average DR = 0.33). This study demonstrates the potential of speech-based systems for multifaceted mental health assessment in the general population, not only for detecting clinical thresholds but also for estimating symptom severity. Addressing fairness and incorporating uncertainty estimation with selective classification are key contributions that can enhance the clinical utility and responsible implementation of such systems. This approach offers promise for more accurate and nuanced mental health assessments, benefiting both patients and clinicians.
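The abstract itself contains no code, but as an illustrative sketch, the best-performing speech representation (a pretrained Whisper encoder with max pooling over time) could be computed along the following lines, assuming the Hugging Face transformers implementation and that "WhisperM" denotes the openai/whisper-medium checkpoint; the paper's actual pipeline details are not specified here:

```python
# Minimal sketch: utterance-level speech embeddings from a pretrained
# Whisper encoder, max-pooled over time. Checkpoint choice is an
# assumption ("WhisperM" read as Whisper medium).
import torch
from transformers import WhisperFeatureExtractor, WhisperModel

CHECKPOINT = "openai/whisper-medium"
feature_extractor = WhisperFeatureExtractor.from_pretrained(CHECKPOINT)
encoder = WhisperModel.from_pretrained(CHECKPOINT).encoder.eval()

@torch.no_grad()
def embed(waveform, sampling_rate=16_000):
    """Map a mono waveform (1-D float array) to a fixed-size embedding."""
    features = feature_extractor(
        waveform, sampling_rate=sampling_rate, return_tensors="pt"
    ).input_features                                 # (1, n_mels, n_frames)
    hidden = encoder(features).last_hidden_state     # (1, time, dim)
    return hidden.max(dim=1).values.squeeze(0)       # max pool over time -> (dim,)
```

A downstream classifier or regressor would then be trained on these embeddings, with oversampling applied to the minority (above-threshold) class in the training split.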
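The uncertainty analysis reports risk-coverage AUCs, where lower values indicate that abstaining on the least confident cases removes most errors. Below is a minimal sketch of this metric for binary decisions, assuming the predicted probability's distance from 0.5 as the confidence proxy (the abstract does not specify the uncertainty measure used):

```python
import numpy as np

def risk_coverage_auc(y_true, y_prob):
    """Area under the risk-coverage curve for selective classification.

    Samples are retained in decreasing order of confidence; at each
    coverage level, risk is the error rate on the retained samples.
    Lower AUC means errors concentrate on abstained (uncertain) cases.
    """
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    confidence = np.abs(y_prob - 0.5)       # assumption: simple confidence proxy
    order = np.argsort(-confidence)         # most confident first
    errors = (y_prob[order] > 0.5).astype(int) != y_true[order]
    n = len(y_true)
    coverage = np.arange(1, n + 1) / n
    risk = np.cumsum(errors) / np.arange(1, n + 1)
    return np.trapz(risk, coverage)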
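For the fairness analysis, a disparity ratio is commonly defined as the worst-group score divided by the best-group score, so DR = 1.0 means parity across groups. The sketch below follows that assumption; the exact definition used in the study is not given in the abstract:

```python
def disparity_ratio(scores_by_group):
    """Ratio of worst- to best-performing group; 1.0 means parity.

    `scores_by_group` maps a group label (e.g. "female"/"male") to a
    performance score such as AUC or F1 computed on that subgroup.
    """
    scores = list(scores_by_group.values())
    return min(scores) / max(scores)

# Hypothetical example: a DR of 0.86 (as reported for sex) could arise from
# disparity_ratio({"female": 0.74, "male": 0.86})  # -> ~0.86
```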