Trust in artificial intelligence (AI) is a major concern in contemporary computing paradigms. Some studies warn that AI systems may outsmart humans, ultimately threatening the survival of mankind. The behavior of these systems must therefore be controlled to avert potential misuse by bad actors. Recommender systems, a variant of AI products, learn from shoppers' past data and predict items that shoppers may prefer, helping to identify items to recommend to the active user. Studies indicate that classical recommender systems admit untrustworthy data, tempting unscrupulous dealers to misdirect the learning process and potentially defraud buyers. Our study introduces a trust adjustment factor into the AI learning pipeline. We conducted experiments to compare the robustness of the trust-enhanced collaborative filtering recommendation algorithm with that of its classical counterpart. Prediction shift and hit ratio were measured for both sets of algorithms under various forms of profile injection attacks. We found that the trust-enhanced variant significantly outperforms classical collaborative filtering recommendation in terms of robustness, by up to 52% when measured by prediction shift and by up to 18% when measured by hit ratio. Confirmed by t-test, the results suggest that embedding a trust adjustment factor into recommender systems improves their robustness.
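For intuition only, the sketch below is not the authors' implementation: it shows one plausible way a per-user trust adjustment factor (assumed here to be a score in [0, 1]) could weight neighbour similarities in user-based collaborative filtering, together with simple versions of the two robustness metrics named above, prediction shift and hit ratio.

```python
# Minimal sketch, assuming a hypothetical per-user trust score in [0, 1];
# the paper's actual trust model and attack setup may differ.
import numpy as np

def predict_rating(ratings, trust, user, item, k=5):
    """Predict ratings[user, item] from the k most similar users,
    down-weighting neighbours with low trust scores.

    ratings : (n_users, n_items) array, 0 = unrated
    trust   : (n_users,) array in [0, 1], the assumed trust adjustment factor
    """
    rated = ratings[:, item] > 0              # candidate neighbours who rated the item
    rated[user] = False
    if not rated.any():
        return float(ratings[user][ratings[user] > 0].mean())

    # cosine similarity between the active user and each candidate neighbour
    u = ratings[user]
    sims = ratings[rated] @ u / (
        np.linalg.norm(ratings[rated], axis=1) * np.linalg.norm(u) + 1e-9
    )
    sims = sims * trust[rated]                # trust-adjusted similarity

    neighbours = np.argsort(sims)[-k:]        # top-k trust-adjusted neighbours
    weights = sims[neighbours]
    values = ratings[rated][neighbours, item]
    return float(weights @ values / (np.abs(weights).sum() + 1e-9))

def prediction_shift(pre_attack, post_attack):
    """Mean change in the target item's predicted rating after an attack."""
    return float(np.mean(np.asarray(post_attack) - np.asarray(pre_attack)))

def hit_ratio(top_n_lists, target_item):
    """Fraction of users whose top-N recommendation list contains the pushed item."""
    return sum(target_item in top_n for top_n in top_n_lists) / len(top_n_lists)
```

Under this reading, a profile injection attack adds fake user rows to `ratings`; a robust algorithm keeps prediction shift low and the attacked item's hit ratio close to its pre-attack value.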