Abstract

Exposure to alcohol content in media increases alcohol consumption and related harm. With the exponential growth of media content, it is important to use algorithms to automatically detect and quantify alcohol exposure. Foundation models such as Contrastive Language-Image Pretraining (CLIP) can detect alcohol exposure through Zero-Shot Learning (ZSL) without any additional training. In this paper, we evaluated the ZSL performance of CLIP against a supervised algorithm called the Alcoholic Beverage Identification Deep Learning Algorithm Version-2 (ABIDLA2), which is specifically trained to recognise alcoholic beverages in images, across three tasks. We found that ZSL achieved performance similar to ABIDLA2 in two out of three tasks. However, ABIDLA2 outperformed ZSL in a fine-grained classification task in which determining subtle differences among alcoholic beverages (including their containers) is essential. We also found that phrase engineering is essential for improving the performance of ZSL. To conclude, like ABIDLA2, ZSL with a little phrase engineering can achieve promising performance in identifying alcohol exposure in images. This makes it easier for researchers, with little or no programming background, to implement ZSL effectively and obtain insightful analytics from digital media. Such analytics can assist researchers and policy makers in proposing regulations that reduce alcohol exposure and, eventually, alcohol consumption.
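As a rough illustration of the zero-shot approach described above, the sketch below shows how CLIP can score an image against a set of candidate phrases without any task-specific training. It is a minimal example only: the model checkpoint ("openai/clip-vit-base-patch32"), the Hugging Face transformers interface, the image path, and the candidate phrases are assumptions for illustration and are not the exact configuration or phrase set used in the paper.

```python
# Minimal sketch of CLIP zero-shot classification of alcohol exposure.
# Assumptions: Hugging Face transformers, a generic CLIP checkpoint, and
# illustrative candidate phrases (not the phrases engineered in the paper).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical candidate phrases; phrase engineering means refining these.
phrases = [
    "a photo of a glass of beer",
    "a photo of a bottle of wine",
    "a photo of a soft drink",
    "a photo with no beverage",
]

image = Image.open("example.jpg")  # hypothetical input image

# Encode the image and all phrases, then rank phrases by similarity.
inputs = processor(text=phrases, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)

for phrase, prob in zip(phrases, probs.tolist()):
    print(f"{prob:.3f}  {phrase}")
```

In this setup, "phrase engineering" amounts to choosing and refining the text prompts; no retraining of the model is involved, which is what makes the approach accessible to researchers with little programming background.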
