A deep learning approach is proposed for performing tissue-type classification of tomographic microwave and ultrasound property images of the breast. The approach is based on a convolutional neural network (CNN) with the U-net architecture that also quantifies the uncertainty in the classification of each pixel. Quantitative tomographic reconstructions of dielectric properties (complex-valued permittivity) and ultrasonic properties (compressibility and attenuation), as well as their combination, together with the corresponding true tissue-type maps, constitute the training set. The CNN learns to map the quantitative property reconstructions to a single tissue-type image. The level of confidence in the predicted tissue type at each pixel is also determined. This uncertainty quantification is diagnostically critical for biomedical applications, especially when attempting to distinguish between cancerous and healthy tissues. The Gauss-Newton Inversion algorithm is used for the quantitative reconstruction of both the dielectric and ultrasonic properties. Electromagnetic and ultrasound scattered-field data are obtained from MRI-derived numerical breast phantoms. Several numerical breast phantom types, ranging from fatty to dense, are considered. The proposed classification and uncertainty quantification approach is shown to outperform a previously studied tissue-type classification method based on a Bayesian approach.
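The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: a small U-net-style CNN that maps multi-channel quantitative property images to per-pixel tissue-class probabilities together with an uncertainty map. It is not the authors' implementation; the channel count, number of tissue classes, network depth, and the use of Monte Carlo dropout with predictive entropy as the uncertainty measure are all illustrative assumptions, and the `TissueUNet` and `predict_with_uncertainty` names are hypothetical.

```python
# Minimal illustrative sketch (PyTorch assumed), not the paper's implementation:
# a two-level U-net mapping property-image channels (e.g., Re/Im permittivity,
# compressibility, attenuation) to per-pixel tissue-class logits, with
# uncertainty estimated via Monte Carlo dropout (an assumed mechanism).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; dropout stays active during
    # Monte Carlo sampling at prediction time.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Dropout2d(0.2),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TissueUNet(nn.Module):
    """Two-level U-net: encoder, bottleneck, decoder with one skip connection."""

    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                       # (B, 32, H, W)
        e2 = self.enc2(F.max_pool2d(e1, 2))     # (B, 64, H/2, W/2)
        d1 = self.up(e2)                        # (B, 32, H, W)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))
        return self.head(d1)                    # per-pixel class logits


def predict_with_uncertainty(model, x, n_samples=20):
    # Average softmax outputs over stochastic forward passes with dropout
    # enabled; the per-pixel predictive entropy acts as the uncertainty map.
    model.train()  # keeps dropout active
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=1) for _ in range(n_samples)]
        ).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return probs.argmax(dim=1), entropy         # tissue map, uncertainty map


if __name__ == "__main__":
    # Hypothetical input: 4 property channels on a 128x128 reconstruction grid.
    net = TissueUNet(in_channels=4, n_classes=5)
    images = torch.randn(1, 4, 128, 128)
    tissue_map, uncertainty = predict_with_uncertainty(net, images)
    print(tissue_map.shape, uncertainty.shape)  # torch.Size([1, 128, 128]) each
```

In this sketch the combined microwave/ultrasound case simply stacks the reconstructed property images as input channels, which mirrors the abstract's description of training on dielectric, ultrasonic, and combined reconstructions; the actual network and uncertainty formulation are specified in the body of the paper.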