Abstract

To make AI tools for spam review detection more trustworthy, two types of explanation were designed: a Text-AI tool and a Behaviour-AI tool. The Text-AI tool bases its detection criteria on the textual features of reviews, while the Behaviour-AI tool bases its detection criteria on the behavioural features of reviewers. We measured the trust of younger (20–26 years) and older (50–78 years) adults in the AI tools, along with the changes in their credibility judgments of reviews and their overall attitude toward the product after seeing the AI tools' detection results. We mainly found that: (i) almost all older participants reported trusting the AI tools' predictions, yet 48.7% of them would abandon the AI when its prediction differed from their own judgment; (ii) younger adults showed higher trust in the Behaviour-AI tool than in the Text-AI tool, especially when the AI tool detected more spam reviews than they did themselves; (iii) regardless of age and type of explanation, participants perceived the AI tools as more competent and benevolent when the tools outperformed them by detecting unexpected spam reviews.
