Abstract

There is a “timing optimism” that artificial general intelligence will be achieved soon, but some literature has suggested that people have mixed feelings about its overall impact. This study expanded on those findings by investigating how Taiwanese university students perceived the overall impact of high-level machine intelligence (HLMI) in three areas: a set of 12 human professions, autonomous vehicles, and smart homes. Respondents showed a relatively more positive attitude, with a median answer of “on balance good”, toward the development of HLMI for occupations with a higher probability of automation and computerization, and a less positive attitude, with a median of “more or less neutral”, toward professions involving human judgment and social intelligence; for professions involving creativity, the median was “on balance bad”. Respondents also held a highly positive attitude toward the smart home as an AI application, while showing relatively more reservation toward autonomous vehicles. Gender, area of study, and a computer science background emerged as predictors in many cases; traffic benefits and safety and regulation concerns, among others, were the most significant predictors of the perceived overall impact of autonomous vehicles, and comfort and support benefits were the most significant predictor for smart homes. Recommendations for educators, policy makers, and future research are provided.

Highlights

  • Modern society is witnessing a recent resurgence in “artificial intelligence (AI) optimism”, but some researchers [1] have pointed out there is a distinction between “timing optimism”, or the belief that artificial general intelligence (AGI) will be achieved soon, and optimism about the beneficial effects of human-level AGI

  • Research on the technical development and adoption of autonomous vehicles and smart homes is well established, but this study presented an initial picture of how people assess their overall impact on mankind; median answers of “on balance good” were found for both autonomous vehicles and smart homes

Introduction

Modern society is witnessing a recent resurgence in “artificial intelligence (AI) optimism”, but some researchers [1] have pointed out that there is a distinction between “timing optimism”, the belief that artificial general intelligence (AGI) will be achieved soon (evolving from current stages of artificial narrow intelligence), and optimism about the beneficial effects of human-level AGI. Müller and Bostrom [2] surveyed experts’ opinions on the future progress of AI. They used the term “high-level machine intelligence” (HLMI), corresponding to AGI in this study, for machine intelligence able to carry out most human professions at least as well as a typical human, and referred to HLMI that greatly surpasses the performance of every human in most professions as artificial superintelligence (ASI). They assessed, on the one hand, the expected timing of both HLMI and ASI and, on the other, asked participants to evaluate the overall positive and negative impact of this development on humanity. For the impact of ASI, there was about one chance in two that the development would turn out to be “extremely good” or “on balance good” for mankind, and about one in three that it would be “on balance bad” or “extremely bad”.
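
To make the response scale concrete, the following is a minimal sketch of how median answers on such a scale can be tabulated. The 5-point coding from “extremely bad” to “extremely good” and the example responses are illustrative assumptions, not the study’s actual data or analysis code.

    import statistics

    # Hypothetical 5-point coding of the response categories (assumption for illustration).
    SCALE = {
        "extremely bad": 1,
        "on balance bad": 2,
        "more or less neutral": 3,
        "on balance good": 4,
        "extremely good": 5,
    }

    # Hypothetical responses to one item, e.g. the overall impact of HLMI on a given profession.
    responses = ["on balance good", "more or less neutral", "on balance good",
                 "extremely good", "on balance bad"]

    coded = [SCALE[r] for r in responses]
    median_code = statistics.median(coded)          # 4 for this example
    median_label = {v: k for k, v in SCALE.items()}[int(median_code)]
    print(median_label)                             # prints: on balance good

Reporting the median label rather than a mean respects the ordinal nature of the scale, which is why results such as “on balance good” or “more or less neutral” are quoted as category names rather than numeric averages.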
