Abstract

Advances in healthcare social media, together with the information they carry about doctor-patient (D–P) communication and prior patients' treatment experiences, can positively influence the D–P relationship. As prior patients increasingly share photos of their treatment experiences on healthcare social media sites from personal computers and smartphones, the volume of multi-modal content has been growing exponentially. There is therefore an increasing need to process such information and mine useful knowledge about D–P communication. Using 68,610 reviews, including 4,618 photos, scraped from a popular physician-rating site, Yelp.com, this study proposes a novel, real-time, multi-modal classification framework that uses the textual and visual modalities as sources of information. Furthermore, this work proposes a social media image filtering mechanism that removes duplicate and irrelevant content from the data. Results show that data filtering enhances information reliability, while the addition of novel textual and visual feature sets improves classification accuracy by up to 16.94%. In addition, fusing textual and visual features enhances classifier performance by 18.24%, producing better results than using either modality alone. The findings also reveal that deep learning algorithms outperform classical machine learning algorithms across the entire novel feature model, indicating the usefulness and suitability of the proposed methodology. Finally, the findings from extensive experiments on the physician-review dataset demonstrate the implications of the proposed system and can guide doctors in improving the D–P relationship.
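
To illustrate the fusion idea described above, the following is a minimal sketch of how textual and visual features for the same review might be concatenated before classification. It assumes TF-IDF text features, precomputed fixed-length image embeddings, and a logistic regression classifier; these are illustrative placeholders, not the paper's actual feature sets or models.

```python
# Sketch of early fusion: concatenate text and image feature vectors per review,
# then train a single classifier on the fused representation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy review texts and binary labels (placeholders, not the Yelp data).
texts = [
    "great doctor, very attentive and clear explanation",
    "long wait and rude front-desk staff",
    "clean clinic, the physician listened carefully",
    "billing errors and poor communication",
]
labels = np.array([1, 0, 1, 0])

# Textual modality: TF-IDF vectors.
text_features = TfidfVectorizer().fit_transform(texts).toarray()

# Visual modality: assume each review photo was already encoded into a
# fixed-length embedding (e.g., by a pretrained CNN); random vectors stand in here.
rng = np.random.default_rng(0)
image_features = rng.normal(size=(len(texts), 8))

# Fusion: stack the two modalities column-wise into one feature matrix.
fused = np.hstack([text_features, image_features])

# Train a simple classifier on the fused features and run a sample prediction.
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.predict(fused[:2]))
```

In practice the same pattern extends to deep models: each modality is encoded separately and the resulting vectors are merged before the final prediction layer, which is the behavior the reported fusion gains refer to.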
