ANTECEDENTS INFLUENCING THE SUSTAINED INTENTION TO UTILIZE GENERATIVE ARTIFICIAL INTELLIGENCE CHATBOTS WITHIN THE DOMAINS OF HOSPITALITY AND TOURISM

Abstract

Generative artificial intelligence (GenAI) has evolved as a distinct branch of artificial intelligence technology, with the lofty goal of imbuing machines with human-like reasoning and behavior. Although artificial intelligence and its applications, such as chatbots, have only recently been introduced, these technologies will soon become indispensable in the hospitality and tourism industries. It is therefore imperative to understand the factors influencing tourists' intentions to persist in utilizing GenAI chatbots for hotel bookings and travel-related activities. Utilizing the framework of social cognitive theory, the present research examines the relationships among the strengths of GenAI chatbots, perceived advantage, technical self-efficacy, perceived personalization, and the intention to continue using chatbots. A survey was administered through the Prolific platform to gather data from a sample of 450 tourists residing in the United States of America. The collected data were analyzed through partial least squares-based structural equation modeling (PLS-SEM) using SmartPLS 4 software. Our findings revealed that perceived advantage did not serve as the predominant catalyst for tourists' continued intention to engage with chatbots. Rather, our results suggest a nuanced understanding of user perceptions, wherein intrinsic factors (e.g., technical self-efficacy) and experiential benefits (e.g., perceived personalization) outweigh traditional perceptions of technological superiority. Designers and developers ought to build chatbots that offer tourists tailored suggestions, complemented by visual elements such as images, videos, and virtual tours of recommended locations. Hospitality and tourism authorities should provide financial and informational support for designers of GenAI technologies such as chatbots, because chatbots can handle booking inquiries, make reservations for accommodation and other facilities, and provide advice on destinations and tourism sites, together with immediate and tailored information.
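
As a rough illustration of the analysis pipeline the abstract describes (composite construct scores built from survey items, then structural paths to continued-use intention), here is a minimal Python sketch. It is not SmartPLS: equal-weight composites and ordinary least squares stand in for the iterative PLS-SEM weighting scheme, and the item names, construct mapping, and simulated responses are all hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical survey data: 7-point Likert items for each construct.
rng = np.random.default_rng(0)
n = 450  # sample size matching the abstract
items = {
    "chatbot_strengths":   ["cs1", "cs2", "cs3"],
    "perceived_advantage": ["pa1", "pa2", "pa3"],
    "tech_self_efficacy":  ["se1", "se2", "se3"],
    "personalization":     ["pp1", "pp2", "pp3"],
    "continued_intention": ["ci1", "ci2", "ci3"],
}
data = pd.DataFrame(
    rng.integers(1, 8, size=(n, sum(len(v) for v in items.values()))),
    columns=[col for cols in items.values() for col in cols],
)

# Step 1: composite scores -- standardized mean of each construct's items.
# (PLS-SEM iteratively reweights indicators; equal weights are a crude stand-in.)
scores = pd.DataFrame(
    {name: data[cols].mean(axis=1) for name, cols in items.items()}
)
scores = (scores - scores.mean()) / scores.std(ddof=0)

# Step 2: structural model -- regress the outcome on its predictors via OLS.
X = scores[["chatbot_strengths", "perceived_advantage",
            "tech_self_efficacy", "personalization"]].to_numpy()
y = scores["continued_intention"].to_numpy()
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
for name, b in zip(["intercept"] + list(items)[:4], beta):
    print(f"{name:22s} path coefficient ~ {b:+.3f}")
```

A full PLS-SEM analysis would additionally report indicator reliability, bootstrapped significance levels, and model fit, all of which this toy omits.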

Similar Papers
  • Research Article
  • Cited by 28
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more


  • Discussion
  • Cited by 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer


  • Research Article
  • Cited by 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more


  • Book Chapter
  • Cited by 1
  • 10.4018/979-8-3693-3691-5.ch006
Convergence of Generative Artificial Intelligence (AI)-Based Applications in the Hospitality and Tourism Industry
  • Sep 13, 2024
  • Amrik Singh

Generative artificial intelligence (GAI) offers important opportunities for the hospitality and tourism (HT) industry in the context of operations, design, marketing, destination management, human resources, revenue management, accounting and finance, strategic management, and beyond. However, implementing GAI in HT contexts comes with ethical, legal, social, and economic considerations that require careful reflection by HT firms. The hospitality and tourism sector has witnessed phenomenal growth in customer numbers during the post-pandemic times. This growth has been accompanied by the use of technologies in customer interface and backend activities, including the adoption of self-serving technologies. This study highlights the potential challenges of implementing such technologies from the perspectives of companies, customers, and regulators, and it aims to analyze the existing practices and challenges and establish a research agenda for implementing generative AI and similar tools in the hospitality and tourism industry.

  • Research Article
  • Cited by 16
  • 10.1162/daed_e_01897
Getting AI Right: Introductory Notes on AI & Society
  • May 1, 2022
  • Daedalus
  • James Manyika


  • Conference Article
  • 10.54941/ahfe1004960
Democracy and Artificial General Intelligence
  • Jan 1, 2024
  • Elina Kontio + 1 more

We may soon have to decide what kind of Artificial General Intelligence (AGI) computers we will build and how they will coexist with humans. Many predictions estimate that artificial intelligence will surpass human intelligence during this century. This poses a risk to humans: computers may cause harm to humans either intentionally or unintentionally. Here we outline a possible democratic society structure that will allow both humans and artificial general intelligence computers to participate peacefully in a common society.

There is a potential for conflict between humans and AGIs. AGIs set their own goals, which may or may not be compatible with human society. In human societies, conflicts can be avoided through negotiations: all humans have about the same world view, and there is an accepted set of human rights and a framework of international and national legislation. In the worst case, AGIs harm humans either intentionally or unintentionally, or they deplete human society of resources.

So far, the discussion has been dominated by the view that AGIs should contain fail-safe mechanisms which prevent conflicts with humans. However, even though this is a logical way of controlling AGIs, we feel that the risks can also be handled by using existing democratic structures in a way that makes it less appealing to AGIs (and humans) to create conflicts.

The view of AGIs that we use in this article follows Kantian autonomy, where a device sets goals for itself and has urges or drives like humans. These goals may conflict with other actors' goals, which leads to a competition for resources. The way of acting and reacting to other entities creates a personality, which can differ from AGI to AGI. The personality may not be like a human personality, but it is nevertheless an individual way of behaving.

The Kantian view of autonomy can be criticized because it neglects the social aspect. The AGIs' individual level of autonomy determines how strong their society is and how strongly integrated they would be with human society. The critique of their Kantian autonomy is valid, and it is here that we wish to intervene.

In the Kantian tradition, conscious humans have free will, which makes them morally responsible. Traditionally we think that computers, like animals, lack free will or, perhaps, deep feelings. They do not share human values. They cannot express their internal world like humans. This affects the way that AGIs can be seen as moral actors. Often the problem of constraining AGIs has been approached technically, placing different checks and designs that will reduce the likelihood of adverse behaviour towards humans. In this article we take another point of view: we look at the way humans behave towards each other and try to find a way of using the same approaches with AGIs.

  • Research Article
  • Cited by 233
  • 10.1057/s41599-020-0494-4
Why general artificial intelligence will not be realized
  • Jun 17, 2020
  • Humanities and Social Sciences Communications
  • Ragnar Fjelland

The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines but can also manipulate symbols. It is possible to pursue this goal without assuming that machine intelligence is identical to human intelligence. This is known as weak AI. However, many AI researchers have pursued the aim of developing artificial intelligence that is in principle identical to human intelligence, called strong AI. Weak AI is less ambitious than strong AI, and therefore less controversial. However, there are important controversies related to weak AI as well. This paper focuses on the distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). Although AGI may be classified as weak AI, it is close to strong AI because one chief characteristic of human intelligence is its generality. Although AGI is less ambitious than strong AI, there were critics almost from the very beginning. One of the leading critics was the philosopher Hubert Dreyfus, who argued that computers, which have no body, no childhood, and no cultural practice, could not acquire intelligence at all. One of Dreyfus' main arguments was that human knowledge is partly tacit and therefore cannot be articulated and incorporated into a computer program. However, today one might argue that new approaches to artificial intelligence research have made his arguments obsolete. Deep learning and Big Data are among the latest approaches, and advocates argue that they will be able to realize AGI. A closer look reveals that although the development of artificial intelligence for specific purposes (ANI) has been impressive, we have not come much closer to developing artificial general intelligence (AGI). The article further argues that this is in principle impossible, and it revives Hubert Dreyfus' argument that computers are not in the world.

  • Research Article
  • 10.1152/advan.00119.2025
Concepts behind clips: cinema to teach the science of artificial intelligence to undergraduate medical students.
  • Dec 1, 2025
  • Advances in physiology education
  • Krishna Mohan Surapaneni

As artificial intelligence (AI) becomes more integrated into the field of healthcare, medical students need to learn foundational AI literacy. Yet traditional, descriptive methods of teaching AI topics are often ineffective in engaging learners. This article introduces a new application of cinema to teaching AI concepts in medical education. Using meticulously chosen clips from the movie "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)", students were introduced to the primary differences between artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). This method triggered encouraging responses from students, with learners indicating greater conceptual clarity and heightened interest. Film, as an emotive and visual medium, not only makes difficult concepts easy to understand but also encourages curiosity, ethical consideration, and higher-order thinking. This pedagogic intervention demonstrates how narrative-based learning can make abstract AI systems more relatable and clinically relevant for future physicians. Beyond technical content, the method can offer opportunities to cultivate critical engagement with the ethical and practical dimensions of AI in healthcare. Integrating film into AI instruction could bridge the gap between theoretical knowledge and clinical application, offering a compelling pathway to enrich medical education in a rapidly evolving digital age.

NEW & NOTEWORTHY: This article introduces a new learning strategy that employs film to teach artificial intelligence (AI) principles in medical education. By introducing clips from the "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)" movie to clarify artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI), the approach converted passive learning into an emotionally evocative and intellectually stimulating experience. Students experienced enhanced comprehension and increased interest in artificial intelligence. This narrative-driven, visually oriented process promises to incorporate technical and ethical AI literacy into medical curricula with enduring relevance and impact.

  • Research Article
  • 10.70777/si.v1i1.11101
Highlights of the Issue
  • Oct 15, 2024
  • SuperIntelligence - Robotics - Safety & Alignment
  • Kristen Carlson

To emphasize the journal's concern with AGI safety, we inaugurate Artificial General Intelligence (AGI) by focusing the first issue on Risks, Governance, and Safety & Alignment Methods.

Risks

The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence. The most comprehensive AI risk taxonomy to date (777 specific risks classified into 43 categories) has been created by workers collaborating from a half-dozen institutions. We excerpt 11 key pages from the original 79-page report. Their 'living' Repository is online and free to download and share. The authors' intention is to provide a common frame of reference for AI risks. Slattery et al.'s set of ~100 references is excellent and thorough; thus, poring through this study for your own specific interest is an efficient way to get on top of the entire current AI risk literature. The highest of their three taxonomy levels, the Causal Taxonomy, is organized according to the cause of the risk (Human or AI), the intention (Intentional or Unintentional action), and the timing (Pre-deployment or Post-deployment of the AI system). The Causal Taxonomy can be used "for understanding how, when, or why risks from AI may emerge." They also call readers' attention to the AI Incident Database.[1] The Incident Database publishes a monthly roundup.

AI Risk Categorization Decoded (AIR 2024). By examining 8 government and 16 corporate AI risk policies, Zeng et al. seek to provide an AI risk taxonomy unified across public- and private-sector methodologies. They present 314 risk categories organized into a 4-level hierarchy. The highest level is composed of System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. Their first takeaway is that more categories are advantageous, allowing finer granularity in identifying risks and unifying risk categories across methodologies; thus, indirectly, they argue for the Slattery et al. taxonomy with double the categories. This emphasis on fine granularity parallels a comment made to me by Lance Fortnow, Dean of Illinois Institute of Technology College of Computing, that the diversity and specificity of human laws indicate a similar diversity may be necessary to assure AGI safety, and that recent governance proposals may be simplistic. Indeed, Zeng et al.'s second takeaway is that government AI regulation may need significant expansion; few regulations address foundation models, for instance. Their third takeaway is that comparing AI risk policies from diverse sources is extremely helpful for developing an overall grasp of the issues (how different organizations conceptualize risk, for instance) and for moving toward international cooperation to manage AI risk.

AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies. Applying the work just described, Zeng et al. constructed an AI safety benchmark aligned with their unified view of private- and public-sector AI risk policy, specifically targeting the gap they uncovered in the regulation of foundation models. They develop and test nearly 6,000 risky prompts, find inconsistent responses across foundation models, and give examples of foundation-model safety failures in response to various prompts. This work seems a significant advance toward an AGI safety certification conducted by an AI industry consortium or an insurance company consortium along the lines of, e.g., UL Solutions (previously Underwriters' Laboratory).
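
For concreteness, the three axes of the Causal Taxonomy summarized above map naturally onto a small data structure. The sketch below is an illustrative encoding of that idea only; the class and field names are invented here and are not Slattery et al.'s schema.

```python
from dataclasses import dataclass
from enum import Enum

# The three axes of the Causal Taxonomy, as summarized above.
class Cause(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass(frozen=True)
class Risk:
    """One risk entry, factored along the taxonomy's causal axes."""
    description: str
    cause: Cause
    intent: Intent
    timing: Timing

# Example: a deployed model leaking training data without anyone intending it.
leak = Risk("training-data leakage", Cause.AI,
            Intent.UNINTENTIONAL, Timing.POST_DEPLOYMENT)
print(leak)
```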
A Comprehensive Survey of Advanced Persistent Threat Attribution. We wanted to publish this important article but had to pull it due to a license conflict; please see their arXiv preprint. APT (Advanced Persistent Threat) attacks are attack campaigns orchestrated by highly organized and often state-sponsored threat groups that operate covertly and methodically over prolonged periods. APTs set themselves apart from conventional cyber-attacks by their stealthiness, persistence, and precision in targeting. This systematic review by Rani et al. of 137 papers focuses on the growing development of automated means to detect AI and ML APTs early and identify the malevolent actors involved. They present the Automated Attribution Framework, which consists of 1) collecting training data on past attacks, 2) preprocessing and enriching the training data, 3) training and pattern recognition on the data, and 4) attribution, i.e., applying the trained models to identify the perpetrating actors. The open research questions summarized by Rani et al. point toward AI taking an increasing role in APT attribution.

Governance

Excerpts from Aschenbrenner, Situational Awareness. I was pointed to Leopold Aschenbrenner's 165-page missive by Scott Aaronson's blog, which said he knew Leopold during his sabbatical at OpenAI and recommended people give it a read and take it seriously. The essence of it is that if we extrapolate from recent AI progress, we will have AGI by 2030, and therefore, for national security, a Manhattan Project-style national AI effort, including nationalizing leading private AGI labs, should be mounted. Here we reprint his Part IV, "The Project," advocating this controversial effort and describing his vision of how it will occur. I recommend anyone concerned about the dangers of AGI, and especially those working toward AGI, read Aschenbrenner's entire book. Take a look at the Table of Contents preceding our reprint of "The Project." We also reprint his Ch. V, "Parting Thoughts," in our Commentary section.

Soft Nationalization: How the US Government Will Control AI Labs. Aschenbrenner advocates nationalizing leading AI labs into a high-security, top-secret, US federal government project. OK, how, exactly? A perfect complement to Aschenbrenner's thoughts is given by Deric Cheng and Corin Katzke of Convergence Analysis. They examine how AGI R&D nationalization could happen realistically, effectively, and efficiently. Their report outlines key issues and initial thoughts as a prelude to their own and others' detailed proposals to come. It is a beautiful piece of work, IMHO. It is not impossible for private companies to develop AGI responsibly and securely, but the main goal of this journal is to make AGI safety the central debate in the AGI community, and the nationalized, Manhattan-style project point of view must be presented. Further, I find Aschenbrenner's arguments persuasive and Cheng and Katzke's thoughtful outline of how nationalization could actually occur convincing, e.g. (pg. 8): "The US may be able to achieve its national security goals with substantially less overhead than total nationalization via effective policy levers and regulation… We argue that various combinations of the policy levers listed below will likely be sufficient to meet US national security concerns, while allowing for more minimal governmental intrusion into private frontier AI development."
Acceptable Use Policies for Foundation Models. Acceptable use policies are legally binding policies that prohibit specific uses of foundation models. Klyman surveys acceptable use policies from 30 developers, encompassing 127 specific use restrictions cited in 184 articles. Like Zeng et al. in "AI Risk Categorization Decoded (AIR 2024)," Klyman highlights the inconsistent number and type of restrictions across developers and the lack of transparency behind their motivation and enforcement, indicating the need for developers to create a unified consensus acceptable use policy. The general motivations are to reduce legal and reputational risk. However, standing in the way of developers working to create a unified policy set is the motivation to use restrictions to hinder competitors from using proprietary models. Enforcement can also hinder effective use of a foundation model. Acceptable use policies can be categorized into content restrictions (e.g., the top four: misinformation, harassment, privacy, discrimination) and end-use restrictions, e.g., Anthropic's restriction on "model scraping," i.e., someone training their own AI model on prompts and outputs from Anthropic's model. Another use restriction is scaling up the distribution of AI-created content, such as automated online posting. As with the Zeng et al. articles, Klyman's article points the way toward a homogeneous acceptable use policy across a diverse AI ecosystem. Steve Omohundro comments: "…the AI labs' 'alignment work' … is all about the AIs rather than their impact on the world. For goodness sake, the Chinese People's Liberation Army has already fine-tuned Meta's Llama 3.1 to promote Chinese military goals! And Meta's response was 'that's contrary to our acceptable use policy!'" From the article: "Without information about how acceptable use policies are enforced, it is not obvious that they are actually being implemented or effective in limiting dangerous uses. Companies are moving quickly to deploy their models and may in practice invest little in establishing and maintaining the trust and safety teams required to enforce their policies to limit risky uses."

Safety Methods

Benchmark Early and Red Team Often (Executive Summary excerpt). Two leading methods for uncovering AI safety breaches are 1) inexpensive benchmarking against a standardized test suite, such as prompts for large language models, and 2) longer, higher-cost but more informative intensive, interactive testing by human domain experts ("red-teaming"). Barrett et al., from the UC Berkeley Center for Long-Term Cybersecurity, advocate the two-pronged approach indicated by the article title. They analyze the methods' potential for eliminating LLM "dual use," i.e., corrupting LLMs into creating chemical, biological, radiological, nuclear (CBRN), cyber, or other weaponry or attacks, but the methods apply to less dangerous risk testing as well. Essentially, Barrett et al. advocate frequent use of benchmarks until a model attains a high safety score, followed by intensive red-teaming to test the model in more depth and yield more accuracy (a schematic of this loop appears after this entry). Their paraphrase of the article title is: Benchmark Early and Often, and Red-Team Often Enough.

Against Purposeful Artificial Intelligence Failures. A paper that had to be written, and not surprisingly was, by Yampolskiy, who has sought to cover every aspect of AGI risks, is one arguing that intentionally triggering an AI disaster should not be entertained as an option to alert humanity to the danger of AGI.
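
Barrett et al.'s "benchmark early and often, red-team often enough" protocol, as summarized above, amounts to a two-stage control loop: iterate the cheap benchmark until a score threshold is cleared, then spend on intensive red-teaming. The skeleton below is a hypothetical sketch of that control flow only; `run_benchmark` and `red_team` are placeholder stubs, not any real evaluation harness.

```python
import random

def run_benchmark(model: str) -> float:
    """Cheap automated safety benchmark; returns a score in [0, 1].
    Placeholder: a real harness would score responses to a prompt suite."""
    return random.uniform(0.7, 1.0)

def red_team(model: str) -> list[str]:
    """Expensive human red-teaming; returns discovered failure modes.
    Placeholder for interactive expert testing."""
    return []

def evaluate(model: str, threshold: float = 0.95, max_rounds: int = 10) -> bool:
    """Benchmark early and often; red-team once the benchmark is passed."""
    for round_ in range(max_rounds):
        score = run_benchmark(model)
        print(f"round {round_}: benchmark score {score:.3f}")
        if score >= threshold:
            break  # cheap gate passed; escalate to deep testing
    else:
        return False  # never cleared the benchmark gate
    failures = red_team(model)
    return not failures  # safe only if red-teamers found nothing

if __name__ == "__main__":
    print("release candidate safe:", evaluate("toy-model-v1"))
```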
Models That Prove Their Own Correctness. Especially in light of Dalrymple et al.'s governance proposal, Toward Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, 'models that prove their own correctness' seem especially desirable, if not essential. Dalrymple et al. call for 1) a world model, 2) a safety specification, and 3) a means to verify the safety specification: a highly intriguing proposal, but one which falls short of providing an example of such a model or means of verification (we hear that Dalrymple is working on an example). Paradise et al. describe two uses of interactive proof systems (IPS) combined with ML to allow a model to prove its own 'correctness,' as specified by the user of the model. The first method requires access to a training set of IPS transcripts (the sequence of interactions between the Verifier and Prover) in which the Verifier accepted the Prover's probabilistic proof. The second method, Reinforcement Learning from Verifier Feedback (RLVF; note the intentional similarity to Reinforcement Learning from Human Feedback, RLHF), avoids the need for the accepted transcripts (which are in essence an external truth oracle), but only after training on such a verified transcript (its 'base model') using transcript learning; from then on it can generate its own emulated verified transcripts. The paper opens the door to other innovative applications of ML to IPS. This is a rather deep paper that requires further analysis to judge the realization of its promise. We look forward to a revised version after its peer review at an unspecified journal. We thank Syed Rafi for the pointer to the paper and Quinn Dougherty for inviting Orr Paradise to his safe AGI reading group.

Language-Guided World Models: A Model-Based Approach to AI Control. "Model-based agents are artificial agents equipped with probabilistic 'world models' that are capable of foreseeing the future state of an environment (Deisenroth and Rasmussen, 2011; Schmidhuber, 2015). World models endow these agents with the ability to plan and learn in imagination (i.e., internal simulation)…." Citing Dalrymple et al., Zhang et al. likewise extend the capabilities of world models to increase human control over AI. By adjusting the world model, humans can affect many context-sensitive policies simultaneously. However, for human-AI interaction to be efficient, the world model must process natural language; hence, language-guided world models (LWMs). Natural-language capability also increases the efficiency of model learning by permitting models to read text. World models increase AI transparency, which natural-language interaction furthers by allowing humans to query models verbally. As an example, in Sec. 5.3, "Application: Agents that discuss plans with humans," Zhang et al. describe an agent that uses its LWM to plan a task and then ask a human to review it for safety.

Commentary

Steve Omohundro, "Progress in Superhuman Theorem Proving?" Our co-founding editor Steve Omohundro is a strong proponent of Provably Safe AI, in which automated theorem-proving will play a major role.[2] Here Steve discusses current developments in using proof to lessen LLM hallucinations, the implications of superhuman theorem-proving for safe AGI, and resources for interested readers.
On Yampolskiy, "Against Purposeful Artificial Intelligence Failures". Topic Editor Jim Miller, Professor of Economics, Game Theory, and Sociology at Smith College, critiques Roman Yampolskiy's argument against triggering a deliberate AI failure to wake the world up to AI dangers.

Leopold Aschenbrenner, Situational Awareness, "Parting Thoughts". Aschenbrenner dismisses his critics as unrealistic and outlines the core tenets of "AI Realism."

Rowan McGovern, "Unhobbling Is All You Need?" Commentary on Aschenbrenner's Situational Awareness. McGovern questions Aschenbrenner's fundamental assumption that "unhobbling" alone ("fixing obvious ways in which models are hobbled by default, unlocking latent capabilities and giving them tools, leading to step-changes in usefulness") justifies extrapolating recent AI progress to predict the advent of AGI by 2030. McGovern: "Unhobbling conflates computing power with intelligence."

[1] https://incidentdatabase.ai/. "Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."
[2] Tegmark, M., & Omohundro, S. (2023). Provably safe systems: the only path to controllable AGI. arXiv. https://arxiv.org/abs/2309.01933.

  • Research Article
  • Cited by 1
  • 10.4467/29567610pib.24.002.19838
Sztuczna inteligencja a bezpieczeństwo państwa [Artificial Intelligence and State Security]
  • Jun 10, 2024
  • Prawo i Bezpieczeństwo
  • Norbert Malec

Technologically advanced artificial intelligence (AI) is making a significant contribution to strengthening national security. AI algorithms facilitate the processing of vast amounts of information, increasing the speed and accuracy of decision-making. Artificial intelligence and machine learning (AI/ML) are crucial for countering integrated hybrid attacks and protecting against new threats in cyberspace. Existing AI capabilities have significant potential to impact national security by leveraging machine learning for automation in labor-intensive activities such as satellite imagery analysis and defense against cyber attacks. This article examines selected aspects of the impact of artificial intelligence on enhancing a state's ability to protect its interests and its citizens. Artificial intelligence, through the use of neural networks, predictive analytics, and machine learning algorithms, enables security agencies to analyse vast amounts of data and identify patterns indicative of potential threats. Integrating artificial intelligence into surveillance, border control, and threat assessment systems enhances the ability to respond preemptively to security challenges. In addition, artificial intelligence algorithms facilitate the processing of vast amounts of information, increasing the speed and accuracy of decision-making by police authorities. The rapid development of AI raises a number of questions about its use in securing not only national security but also the protection of all citizens. In particular, it is worth answering the question of how artificial intelligence affects national security, and clarifying how law enforcement agencies can use artificial intelligence to maximise the benefits of the new technology in terms of security and protecting communities from rising crime. The analysis is based on a descriptive method, describing the phenomenon by explaining the concepts and applications of artificial intelligence in order to determine its role in the national security sphere. An analysis of the usefulness of artificial intelligence, in particular in police operations, is undertaken with the aim of defending the thesis that, despite some threats AI poses to the protection of human rights, it is becoming the best tool in the fight against all types of crime in the country. Technological advances in AI can also have many positive effects for law enforcement agencies, for example in facilitating the identification of persons or vehicles, predicting trends in criminal activities, tracking illegal criminal activities or illegal money flows, and flagging and responding to fake news. Artificial intelligence has emerged as one of the biggest threats to information security, but efforts are being made not only to mitigate this new threat but also to find solutions for how AI can become an ally in the fight against cyber-security, crime, and terrorist threats. Artificial intelligence algorithms search huge datasets of communication traffic, satellite images, and social media posts to identify potential cyber security threats, terrorist activities, and organized crime. When analyzing the opportunities and threats that AI poses to national and public security, it is advisable to seek a strategic advantage in the context of rapid technological change and to manage the many risks associated with AI.
The conclusion highlights the impact of AI on national security, which creates a range of new opportunities coupled with challenges that government agencies should be prepared for in addressing ethical and security dilemmas. Furthermore, AI improves predictive analytics, enabling security agencies to more accurately anticipate potential threats and to enhance their preparedness by identifying vulnerabilities in the national security infrastructure.

  • Research Article
  • 10.17509/ijotis.v5i1.82626
The Future of Teaching: Artificial Intelligence (AI) And Artificial General Intelligence (AGI) For Smarter, Adaptive, and Data-Driven Educator Training
  • Nov 21, 2024
  • Indonesian Journal of Teaching in Science
  • Kumar Balasubramanian

The fast evolution of Artificial Intelligence (AI) and developing Artificial General Intelligence (AGI) capabilities are transforming how education operates, particularly through their effect on teacher training. AI-based systems provide adaptable learning spaces and offer both real-time assessment capabilities and data-driven improvements to educational methods. With its capability for human-level cognitive operations, AGI creates conditions to transform how educators' skills are advanced. The article examines AI and AGI integration within teacher education programs by discussing their practical uses and advantages, together with the challenges and ethical dilemmas encountered. The analysis covers evaluative and creative AI tools such as Gradescope, ChatGPT, and Carnegie Learning, together with developing AGI capabilities. The article uses detailed analysis, together with tables and pictorial representations, to show the necessity of achieving optimal teacher training through balanced AI-human cooperation. The research finds that AI brings efficiency benefits, but AGI's prospective function needs strict governance together with educational alignment to keep teacher education ethical and unbiased.

  • Research Article
  • Cited by 21
  • 10.1080/02508281.2023.2287799
AI-powered ChatGPT in the hospitality and tourism industry: benefits, challenges, theoretical framework, propositions and future research directions
  • Jan 6, 2024
  • Tourism Recreation Research
  • Raouf Ahmad Rather

Generative artificial intelligence (AI) and smart/e-tourism provide imperative opportunities for service industries; however, the implementation of ChatGPT in the tourism and hospitality industry is limited, and it raises different considerations/challenges that need vigilant reflection. Based on this significance and research gap, we develop a theoretical framework which suggests different sets of key research propositions for AI technology-powered ChatGPT. A wide-ranging review of literature and practices was conducted to investigate the conceptual advancements/developments in generative AI-powered technologies, including ChatGPT and chatbots, in marketing, tourism, hospitality, and information management. The proposed framework suggests that generative AI technology-powered ChatGPT develops customers' interaction-based conditions, including experience, engagement/trust, attachment, satisfaction/service quality, attitude change, and operational efficiency, which consequently affect strategic outcomes including behaviours, subjective/psychological well-being, happiness, and performance. Thus, this research note suggests theoretical/practical implications to provide an extensive future-research roadmap on AI technology-powered ChatGPT and also identifies transformative opportunities, challenges, and benefits in tourism, hospitality, and marketing management.

  • Book Chapter
  • 10.2174/9789815165739123010004
Artificial General Intelligence; Pragmatism or an Antithesis?
  • Nov 23, 2023
  • K Ravi Kumar Reddy + 2 more

Artificial intelligence is promoted by means of incomprehensible advocacy through business majors that cannot easily be equated with human consciousness and abilities. Behavioral natural systems are quite different from language models and numeric inferences. This paper reviews centuries of evolved human knowledge and the resolutions referred to by critics in mythology, literature, the imagination of celluloid, and technical work products, which stand against intellect that is both educative and fear-mongering. Human metamorphic abilities are compared against a possible machine takeover, and arguments are envisaged across both worlds of 'Artificial Intelligence' and 'Artificial General Intelligence', with perpetual integrations through 'Deep Learning' and 'Machine Learning', which are early adaptive to 'Artificial Narrow Intelligence': a cross-examination of the hypothetical paranoia that is gripping humanity in modern history. The potential of a highly sensitive humanoid, sanctified with consciousness fully on par with humans, may not be a near probability, but social engineering through the early stages of life may indoctrinate biological senses to a much lower level of ascendancy than Artificial Narrow Intelligence; with further, swindling advancement in processes, this may reach a pseudo-Artificial Intelligence {i}. There are no convincing answers to the discoveries from ancient scriptures about the consciousness of archetypal humans as against an anticipated replication in a fulfilling Artificial Intelligence {ii}. The human use of the lexicon has been the focus of automata for the past few years and the genesis of knowledge; with the divergence of languages and dialects, scores of dictionaries and tools that perform bidirectional voice and text contextual services are already influencing lives, and appeasement to selective human incidentals is widely sustainable today {iii}. Synthesizing and harmonizing a pretentious, labyrinthine gizmo is the center of human anxiety, but only evaluative research could corroborate whether it is tantamount to genetic consciousness.

  • Discussion
  • Cited by 251
  • 10.1108/ijchm-05-2023-0686
Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: practices, challenges and research agenda
  • Jun 7, 2023
  • International Journal of Contemporary Hospitality Management
  • Yogesh K Dwivedi + 3 more

Purpose: The hospitality and tourism sector has witnessed phenomenal growth in customer numbers during the post-pandemic times. This growth has been accompanied by the use of technologies in customer interface and backend activities, including the adoption of self-serving technologies. This study aims to analyze the existing practices and challenges and establish a research agenda for the implementation of generative artificial intelligence (AI) (such as ChatGPT) and similar tools in the hospitality and tourism industry.
Design/methodology/approach: This study analyzes the existing literature and practices. It draws upon these practices to outline a novel research agenda for scholars and practitioners working in this domain.
Findings: The integration of generative AI technologies, such as ChatGPT, will have a transformational impact on the hospitality and tourism industry. This study highlights the potential challenges of implementing such technologies from the perspectives of companies, customers, and regulators.
Research limitations/implications: This study serves as a reference material for those who are planning to use generative AI tools like ChatGPT in their hospitality and tourism businesses. It also highlights potential pitfalls that ChatGPT-enabled systems may encounter during service delivery processes.
Originality/value: This study is a pioneering work that assesses the applications of ChatGPT in the hospitality and tourism industry and highlights the potential and challenges of implementing it.

  • Research Article
  • 10.3390/pr13051413
Artificial General Intelligence (AGI) Applications and Prospect in Oil and Gas Reservoir Development
  • May 6, 2025
  • Processes
  • Jiulong Wang + 3 more

The cornerstone of the global economy, oil and gas reservoir development, faces numerous challenges such as resource depletion, operational inefficiencies, safety concerns, and environmental impacts. In recent years, the integration of artificial intelligence (AI), particularly artificial general intelligence (AGI), has gained significant attention for its potential to address these challenges. This review explores the current state of AGI applications in the oil and gas sector, focusing on key areas such as data analysis, optimized decision and knowledge management, etc. AGIs, leveraging vast datasets and advanced retrieval-augmented generation (RAG) capabilities, have demonstrated remarkable success in automating data-driven decision-making processes, enhancing predictive analytics, and optimizing operational workflows. In exploration, AGIs assist in interpreting seismic data and geophysical surveys, providing insights into subsurface reservoirs with higher accuracy. During production, AGIs enable real-time analysis of operational data, predicting equipment failures, optimizing drilling parameters, and increasing production efficiency. Despite the promising applications, several challenges remain, including data quality, model interpretability, and the need for high-performance computing resources. This paper also discusses the future prospects of AGI in oil and gas reservoir development, highlighting the potential for multi-modal AI systems, which combine textual, numerical, and visual data to further enhance decision-making processes. In conclusion, AGIs have the potential to revolutionize oil and gas reservoir development by driving automation, enhancing operational efficiency, and improving safety. However, overcoming existing technical and organizational challenges will be essential for realizing the full potential of AI in this sector.
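
The review above attributes much of this automation to retrieval-augmented generation. As a generic illustration of the RAG pattern it references (retrieve the most relevant documents, then condition generation on them), here is a minimal keyword-overlap sketch; the corpus snippets and the `generate` stub are invented for illustration, and a real system would use dense embeddings and an actual LLM.

```python
# Minimal retrieval-augmented generation (RAG) pattern: retrieve the
# documents most relevant to a query, then hand them to a generator.
CORPUS = [
    "Seismic survey A shows a high-porosity zone at 2,300 m depth.",
    "Pump P-17 vibration readings exceeded tolerance twice last week.",
    "Drilling parameters for well W-9 were optimized in March.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Stub generator: a real system would send this prompt to an LLM."""
    return f"Q: {query}\nContext:\n" + "\n".join(f"- {c}" for c in context)

print(generate("Which pump readings exceeded tolerance?",
               retrieve("pump vibration tolerance", CORPUS)))
```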

More from: Geojournal of Tourism and Geosites
  • Research Article
  • 10.30892/gtg.61330-1540
A SUSTAINABILITY MODEL OF GEOPARKS APPLIED TO THE TUNGURAHUA VOLCANO GEOPARK
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Catalina Verdugo + 7 more

  • Research Article
  • 10.30892/gtg.61302-1512
DIGGING UP RURAL COMMUNITY-BASED TOURISM (CBT) IN DEVELOPING COUNTRY, INDONESIA’S FRAMEWORK FINDING
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Setiawan Priatmoko + 4 more

  • Research Article
  • 10.30892/gtg.61309-1519
URBAN TOURISM AND UNIVERSITY SPORTS. CASE STUDY ON THE IMPACT OF THE NATIONAL UNIVERSITY FOOTBALL CHAMPIONSHIP ON THE DESTINATION OF ORADEA, ROMANIA
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Grigore Vasile Herman + 10 more

  • Research Article
  • 10.30892/gtg.61315-1525
MULTI-FACTOR GIS MODELING FOR SOLID WASTE DUMPSITE SELECTION IN BOUMEDFAA, ALGERIA
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Tina Benferhat + 1 more

  • Research Article
  • 10.30892/gtg.61317-1527
THE IMPACT AND DYNAMICS OF SOCIAL FACTORS ON THE DEVELOPMENT OF HIGHER EDUCATION AND LABOR MARKET INTEGRATION IN KOSOVO
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Dardan Lajçi + 1 more

  • Research Article
  • 10.30892/gtg.61305-1515
TECHNOGENESIS AS A DRIVER IN THE DEVELOPMENT OF RECREATIONAL AREAS FOR SUSTAINABLE DEVELOPMENT AND BIODIVERSITY CONSERVATION
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Zharas G Berdenov + 6 more

  • Research Article
  • 10.30892/gtg.61316-1526
METHODOLOGY FOR ASSESSING THE NATURAL BLOCK OF TOURIST AND RECREATIONAL POTENTIAL OF THE STUDIED TERRITORIES
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Pavel S Dmitriyev + 6 more

  • Research Article
  • 10.30892/gtg.61301-1511
DESIGNING EFFECTIVE SIGNAGE SYSTEMS FOR TIBET'S POTALA PALACE: A MIXED-METHOD APPROACH TO ENHANCING TOURIST EXPERIENCE
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Meiqi Wu + 2 more

  • Research Article
  • 10.30892/gtg.61310-1520
ANTECEDENTS INFLUENCING THE SUSTAINED INTENTION TO UTILIZE GENERATIVE ARTIFICIAL INTELLIGENCE CHATBOTS WITHIN THE DOMAINS OF HOSPITALITY AND TOURISM
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Sawsan Haider Abdullah Khreis + 4 more

  • Research Article
  • 10.30892/gtg.61311-1521
FORECASTING TECHNOLOGY ACCEPTANCE IN TOURISM AND HOSPITALITY: LESSONS FROM AKMOLA REGION IN KAZAKHSTAN
  • Sep 30, 2025
  • Geojournal of Tourism and Geosites
  • Yerkegul Dyussekeyeva + 5 more
