Data Has Entered the Chat: How Data Workers Conduct Exploratory Visual Analytic Conversations with GenAI Agents

Abstract

We investigate the potential of leveraging the code-generating capabilities of Large Language Models (LLMs) to support exploratory visual analysis (EVA) via conversational user interfaces (CUIs). We developed a technology probe that was deployed through two studies with a total of 50 data workers to explore the structure and flow of visual analytic conversations during EVA. We analyzed conversations from both studies using thematic analysis and derived a state transition diagram summarizing the conversational flow between four states of participant utterances (Analytic Tasks, Editing Operations, Elaborations and Enrichments, and Directive Commands) and two states of Generative AI (GenAI) agent responses (visualization, text). We describe the capabilities and limitations of GenAI agents according to each state and transitions between states as three co-occurring loops: analysis elaboration, refinement, and explanation. We discuss our findings as future research trajectories to improve the experiences of data workers using GenAI. The code and data are available at https://osf.io/6wxpa.
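The state transition structure described above can be sketched as a small lookup table. The state names below come from the abstract; the specific user-state-to-agent-response mapping is purely illustrative and is not the paper's measured transition diagram:

```python
# Illustrative sketch of the conversational state machine described in the
# abstract. State names are from the paper; the transitions shown are
# hypothetical examples, not the study's observed transition frequencies.

USER_STATES = {"analytic_task", "editing_operation",
               "elaboration_enrichment", "directive_command"}
AGENT_STATES = {"visualization", "text"}

# Example single-step transitions from a user utterance state to an
# agent response state.
TRANSITIONS = {
    "analytic_task": "visualization",
    "editing_operation": "visualization",
    "elaboration_enrichment": "text",
    "directive_command": "text",
}

def agent_response(user_state: str) -> str:
    """Return the agent response state for a user utterance state."""
    if user_state not in USER_STATES:
        raise ValueError(f"unknown user state: {user_state}")
    return TRANSITIONS[user_state]
```

A table like this only captures single-step transitions; in the paper's model, sequences of such transitions compose into the three co-occurring loops (analysis elaboration, refinement, and explanation).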

Similar Papers
  • Research Article
  • Cited by: 4
  • 10.9781/ijimai.2024.02.008
A Cybernetic Perspective on Generative AI in Education: From Transmission to Coordination.
  • Mar 1, 2024
  • International Journal of Interactive Multimedia and Artificial Intelligence
  • Dai Griffiths + 3 more

The recent sudden increase in the capabilities of Large Language Models (LLMs), and generative AI in general, has astonished education professionals and learners. In formulating a response to these developments, educational institutions are constrained by a lack of clarity concerning human-machine communication and its relationship to models of education. Ideas and models from the cybernetic tradition can help to fill this gap. Two paradigms are distinguished: (1) the transmission paradigm (combining the model of learning implied by the instruments and processes of formal education and the conduit model of communication), and (2) the coordination paradigm (combining the constructivist model of learning and the coordination model of communication). It is proposed that these paradigms have long coexisted in educational practice in a modus vivendi, which is disrupted by LLMs. If an LLM can pass an examination, then from within the transmission paradigm this can only be understood as demonstrating that the LLM has indeed learned and understood the material being assessed. At the same time, we know that LLMs do not in fact have the capacity to learn and understand, but rather generate a simulacrum of intelligence. It is argued that this paradox prevents educational institutions from formulating a coherent response to generative AI systems. However, within the coordination paradigm the interactions of LLMs and education institutions can be more easily understood and can be situated in a conversational model of learning. These distinctions can help institutions, educational leaders, and teachers to frame the complex and nuanced questions raised by GenAI, and to chart a course towards its effective use in education. More specifically, they indicate that to benefit fully from the capabilities of generative AI, education institutions need to recognize the validity of the coordination paradigm and adapt their processes and instruments accordingly.

  • Research Article
  • 10.55041/ijsrem46621
How Generative AI Can Improve Enterprise Data Management
  • Apr 28, 2025
  • INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Vivek Prasanna Prabu

Generative AI is reshaping the enterprise technology landscape, offering intelligent automation, insight generation, and contextual understanding capabilities that redefine how businesses handle data. Enterprise data management (EDM) - once constrained by rigid architectures, manual processing, and fragmented governance - can now evolve into a dynamic, self-improving ecosystem through the integration of generative AI. With organizations generating petabytes of data from operations, customer interactions, supply chains, and IoT devices, the need for scalable and intelligent data handling systems has never been greater. Generative AI models, including large language models (LLMs) and multimodal transformers, provide new tools for data ingestion, cleansing, integration, transformation, synthesis, and summarization. By applying generative AI to enterprise data workflows, companies can enhance metadata enrichment, automate data cataloging, improve data lineage tracking, and simplify data governance. These capabilities increase data discoverability, trust, and compliance—core principles of modern data management. Additionally, generative AI supports natural language querying, automates report writing, and generates synthetic data for training and simulation, boosting data availability and operational speed. While generative AI brings immense promise, it also raises concerns around hallucination, model transparency, data privacy, and regulatory compliance. Ensuring responsible AI adoption requires rigorous validation, bias mitigation, and alignment with existing data governance policies. Nonetheless, enterprises that embrace generative AI can unlock superior decision-making, improve productivity, and democratize data access across technical and non-technical users. This white paper explores the opportunities, challenges, architectural considerations, and best practices for embedding generative AI into enterprise data management. 
Through industry examples and forward-looking analysis, it offers a roadmap for transforming data operations and maximizing enterprise intelligence in the era of AI. Keywords: Generative AI, Enterprise Data Management, LLMs, Data Governance, Metadata, Data Cataloging, Synthetic Data, Data Lineage, Natural Language Processing, Responsible AI

  • Research Article
  • Cited by: 1
  • 10.55041/ijsrem32623
Real Time Inventory Management System powered by Generative User Interface
  • May 2, 2024
  • INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Omkar Patil

This research explores the development of a real-time inventory management system powered by a generative user interface. By leveraging large language models such as GPT-4, Claude 3, and Google Gemini that support tool (function) calling, and integrating them with modern frontend frameworks like Next.js that support streaming React Server Components (RSC), the proposed system enables interaction with the inventory through natural language prompts. We use PostgreSQL as the database, and server actions interact with the database in real time. The system composes and renders appropriate React components based on the user prompt, providing a personalized user experience. The research discusses the system's architecture, implementation, and potential impact on inventory management systems. It showcases the potential of Large Language Models (LLMs) and conversational interfaces in enhancing enterprise software user experiences. Key Words: Inventory Management System, Generative User Interface, Generative AI, Large Language Models, Conversational Interface, Natural Language Processing
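The tool-calling pattern this abstract describes (an LLM emits a structured function call, which the backend executes against the database) can be sketched in a provider-agnostic way. The tool name `get_stock`, the call format, and the use of sqlite3 in place of PostgreSQL are all illustrative assumptions, not the paper's implementation:

```python
import json
import sqlite3

# Provider-agnostic sketch of LLM tool calling: the model emits a structured
# "function call" (here, a JSON string), which the backend dispatches to a
# real database query. sqlite3 stands in for PostgreSQL.

def get_stock(conn, item: str) -> int:
    """Query the current quantity of an inventory item."""
    row = conn.execute(
        "SELECT quantity FROM inventory WHERE item = ?", (item,)
    ).fetchone()
    return row[0] if row else 0

# Registry mapping tool names (as exposed to the LLM) to backend functions.
TOOLS = {"get_stock": get_stock}

def dispatch(conn, tool_call_json: str):
    """Execute a tool call of the form {"name": ..., "arguments": {...}}."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](conn, **call["arguments"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT, quantity INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 42)")

# A tool call as an LLM with function-calling support might emit it:
result = dispatch(conn, '{"name": "get_stock", "arguments": {"item": "widget"}}')
```

In a system like the one described, the dispatch result would then be streamed back into a rendered React component rather than returned as plain text.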

  • Research Article
  • 10.1200/jco.2024.42.16_suppl.e13623
Generative AI enhanced with NCCN clinical practice guidelines for clinical decision support: A case study on bone cancer.
  • Jun 1, 2024
  • Journal of Clinical Oncology
  • Yanshan Wang + 3 more

e13623 Background: Bone cancer is a complex and challenging disease to diagnose and treat in clinical practice. Recently, generative AI, especially large language models (LLMs), has demonstrated potential as a decision support tool for cancer. However, most implementations have overlooked the integration of available cancer guidelines, such as the NCCN Bone Cancer Guidelines, in fine-tuning the outputs of generative AI models. Incorporating these guidelines into LLMs presents an opportunity to harness the extensive clinical knowledge they contain and improve the decision-support capabilities of the model. Methods: In this study, the aim is to enhance the LLM with cancer clinical guidelines to enable accurate medical decisions and personalized treatment recommendations. Therefore, we introduce a novel method for incorporating the NCCN Bone Cancer Guidelines into LLMs using a Binary Decision Tree (BDT) approach. The approach involves constructing a BDT based on the NCCN Bone Cancer Guidelines, where internal nodes represent decision points from the Guidelines, and leaf nodes signify final treatment suggestions. The LLM then makes a decision at each internal node, considering a given patient's characteristics, and is guided toward a treatment recommendation in a leaf node. To assess the efficacy of Guideline-enhanced LLMs, an oncologist from our team created 11 hypothetical osteosarcoma patients' medical progress notes. Each note contains their demographics, medical history, current illness, physical exams, and diagnostic tests. We tested three LLMs in the implementation (GPT-4, GPT-3.5, and PaLM 2) and compared the LLM-generated treatment recommendations with the gold standard treatment across four runs with different random seeds (a random seed is a setting that controls the LLM outputs). The results are reported as the average of four runs. The original LLMs are used as baseline methods for comparison.
Results: The table below provides a comparison between the performance of the original LLMs and those augmented with cancer guidelines for osteosarcoma treatment recommendations. We observe that the PaLM 2 model demonstrated superior performance compared to its counterparts, underscoring the effectiveness of integrating cancer guidelines into LLMs for decision support. Conclusions: The clinical decision support capabilities of the LLMs are promising when enhanced by the NCCN Bone Cancer Guidelines using our approach. To fully exhibit the potential of our proposed method as a clinical decision support tool, further investigation into other subtypes of bone cancer should be conducted in future studies. [Table: see text]
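The BDT traversal described in the Methods can be sketched as follows. The tree, its questions, and its recommendations are hypothetical placeholders, and the LLM decision at each internal node is stubbed with a simple function; the real method consults the NCCN Guidelines and an actual LLM reading the patient note:

```python
# Illustrative sketch of the guideline Binary Decision Tree (BDT) approach:
# internal nodes pose a yes/no decision point, leaf nodes hold a treatment
# recommendation, and a decision function (an LLM in the paper, a stub here)
# answers each internal question for a given patient. The questions and
# recommendations below are invented and NOT taken from the NCCN guidelines.

class Node:
    def __init__(self, question=None, yes=None, no=None, recommendation=None):
        self.question, self.yes, self.no = question, yes, no
        self.recommendation = recommendation  # set only on leaf nodes

def traverse(node, answer_fn, patient):
    """Walk the BDT, asking answer_fn at each internal node, until a leaf."""
    while node.recommendation is None:
        node = node.yes if answer_fn(node.question, patient) else node.no
    return node.recommendation

tree = Node(
    question="Is the tumor resectable?",
    yes=Node(recommendation="wide excision"),
    no=Node(recommendation="chemotherapy then re-evaluate"),
)

def stub_llm(question, patient):
    # Stand-in for an LLM call that reads the patient's progress note.
    return patient.get("resectable", False)
```

Constraining the LLM to one yes/no decision per node, rather than asking it for a full treatment plan, is what lets the guideline structure steer the final recommendation.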

  • Research Article
  • Cited by: 3
  • 10.69554/dmiv5161
Customer journey optimisation using large language models: Best practices and pitfalls in generative AI
  • Dec 1, 2023
  • Applied Marketing Analytics: The Peer-Reviewed Journal
  • Vaikunth Thukral + 3 more

Today's business environment is moving faster than ever, and the expressive and adaptive capabilities of generative AI (GenAI) and large language models (LLMs) are redefining the enterprise rails of tomorrow. Given the abundance of industry hype, investor expectations and leadership pressure, the initial impulse is to ‘get in the game’. But how does one implement initiatives that drive business outcomes within ethical parameters while avoiding technical pitfalls? Marketers need practical guidance to navigate through these changes. In this paper, the authors examine multiple considerations for deployment of GenAI in marketing and customer experience. How does the marketer decide on which initiatives and opportunities to begin with? Which use cases will drive value as the organisation adapts to deploying these new capabilities? Once a marketer has identified the opportunities to capitalise on through GenAI, how is the capability deployed? There are a variety of approaches that can be considered given the level of organisational capability with AI and resource levels to be applied. As with any cutting-edge capability, there are potential missteps that must be avoided to ensure success. This paper provides some insight based on practical experiences to date that cover ethical, technical and process concerns. The paper presents thoughtful approaches to the deployment of LLMs and GenAI that can result in concrete ROI and reduced risk even in this early stage of adoption. With this information, marketers can be prepared to confidently begin their journey using GenAI to transform their customer experience and drive enterprise value for their organisations.

  • Research Article
  • 10.55041/ijsrem37369
The Future of Smart Home Security: Generative AI and LLMs for Intelligent Event Detection and Personalized Notifications
  • Nov 10, 2024
  • INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Sibin Thomas

Smart home security cameras are becoming more common, but their usefulness can be diminished by notification fatigue from too many alerts about minor incidents. This paper examines the gaps in existing event detection and notification systems in security cameras and recommends using Generative AI and Large Language Models (LLMs) to add intelligence that would improve user experience. Generative AI can be leveraged to classify events more accurately and assist with anomaly detection. LLMs can further be used to create notifications that are tailored to the context and personalized to users' behavior, helping to reduce notification fatigue and provide meaningful alerts. The paper also looks into wider applications of these technologies to add intelligence and improve other related experiences such as automated video summarization, proactive security measures, and improved privacy controls. The integration of Generative AI and LLMs with smart home security camera systems advances smart cameras' capabilities and offers enhanced security and personalized user experiences. Keywords—Smart home security, Generative AI, Large Language Models (LLMs), Event detection, Anomaly detection, Notification fatigue, Context-aware notifications, Personalized security, Reinforcement Learning from Human Feedback (RLHF), Internet of Things (IoT).

  • Conference Article
  • 10.2118/221883-ms
Domain Driven Methodology Adopting Generative AI Application in Oil and Gas Drilling Sector
  • Nov 4, 2024
  • Daria Ponomareva + 5 more

In the dynamic landscape of oil and gas drilling, Generative Artificial Intelligence (Generative AI) emerges as an indispensable ally, leveraging historical drilling data to revolutionize operational efficiency, mitigate risks, and empower informed decision-making. Existing Generative AI methods and tools, such as Large Language Models (LLMs) and agents, require tuning and customization to the oil and gas drilling sector. Applying Generative AI in drilling confronts hurdles such as ensuring data quality and navigating the complexity of operations. Integrating Generative AI into drilling demands a comprehensive and interdisciplinary methodology. The agile strategy revolves around constructing a network of specialized LLM agents, meticulously crafted to understand industry-specific terminology and intricate operational relationships rooted in drilling domain expertise. Every agent is linked to manuals, standards, and a specific operational drilling data source, and has unique instructions optimizing computational efficiency and driving cost savings. Moreover, to ensure cost-effectiveness, LLMs are selectively employed, while repetitive user inquiries are addressed through data retrieval from an aggregated storage. Consistent responses to user queries are provided through text and graphs revealing insights from drilling operations, standards, manuals, practices, and lessons learned. The applied methodology efficiently navigates the pre-processed user database, relying on the custom agents developed. Communication with the user takes the form of a chat framed within a web application, and queries on the database about hundreds of wells are answered in less than a minute. The methodology can analyze data and graphs by comparing Key Performance Indicators (KPIs).
A wide range of graph outputs is represented by bar charts, scatter plots, and maps, including self-explaining charts like the Time versus Depth (TVD) curve with Non-Productive Time (NPT) events marked with details underneath. Understanding the data content, data preparation steps, and user needs is fundamental to a successful methodology application. The proposed Generative AI methodology is not just a tool for data interpretation, but a catalyst for real-time decision-making in complex drilling environments. Its integration into oil and gas drilling operations signifies a pivotal advancement, showcasing its transformative potential in revolutionizing the industry's landscape. This approach leads to notable cost reductions, improved resource utilization, and increased productivity, paving the way for a new era in drilling operations. A method driven by selective, cost-effective, and domain-specific LLM agents stands poised to revolutionize drilling operations, seamlessly integrating generative AI to amplify efficiency and propel informed decision-making within the oil and gas drilling sector.

  • Research Article
  • Cited by: 1
  • 10.1152/physiol.2024.39.s1.2081
Leveraging the power of generative AI: a case study on feedback analysis of student evaluation in an undergraduate physiology practical course
  • May 1, 2024
  • Physiology
  • Angelina Fong + 3 more

Student surveys with Likert scales and open responses are key to gauging the student experience in educational institutions. However, the thematic analysis of open responses is time-consuming, delaying feedback. This study aims to evaluate the efficacy of ChatGPT-4, a generative AI large language model (LLM), in streamlining the thematic analysis of student perception surveys. We hypothesise that LLMs can expedite the process; however, human intervention remains essential. The study focused on a 2nd-year physiology course and compared online vs face-to-face (F2F) delivery to determine whether practical classes could be delivered to students online without compromising the delivery of the desired skills and learning outcomes. Data from six cohorts were included (2019-2022): three semesters online and three F2F. Overall grades, and grades from individual written assessments requiring data analysis and critical thinking, showed no difference between the delivery modes, indicating that major learning outcomes are maintained in online delivery. Student perception was analysed from an online cohort (Semester 2, 2022). Analysis of the Likert data from the student survey of this cohort (response rate: 40/202) found that students strongly agreed that the class was enjoyable (83% agreement) and that the online tools and software were easy to use (83% agreement). Thematic analysis was performed on the open-text responses using an LLM (ChatGPT-4) guided by a structured thematic analysis framework and was conducted in three phases: coding responses, collating codes into themes, and visualizing these themes. Each phase required precise prompt engineering to ensure the outputs were accurate and relevant. Thematic analysis using ChatGPT-4 identified that students enjoyed the social aspects of teamwork and collaboration. The students found the online and software tools easy to use due to rapid feedback from instructors.
Altogether, this produced a positive experience in their online learning experiments. A significant advantage of using ChatGPT-4 is the rapid processing of the thematic analysis, alleviating the burdensome aspects of qualitative analysis and allowing for the timely extraction of the nuanced findings provided by qualitative data, ensuring that student feedback can be effectively addressed. While the results showed that ChatGPT-4 was largely successful in processing the qualitative data, human oversight was necessary to correct minor errors and ensure logical consistency. In addition, LLMs like ChatGPT-4 cannot operate in isolation; human involvement is imperative in making evaluative judgments and checking for hallucinations. Nevertheless, we present a framework for a collaborative human-LLM approach to qualitative analysis of student evaluations to provide more timely feedback and action. The increased rapidity of feedback will help counter the student belief that their feedback goes unread and unheeded, thereby improving student outcomes. This is the full abstract presented at the American Physiology Summit 2024 meeting and is only available in HTML format. There are no additional versions or additional content available for this abstract. Physiology was not involved in the peer review process.
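The three-phase pipeline described above (coding responses, collating codes into themes, then summarizing) can be sketched as follows, with the LLM coding step replaced by a toy keyword lookup. The codebook, keywords, and themes here are invented for illustration and are not the study's actual codes:

```python
from collections import Counter

# Minimal sketch of a thematic-analysis pipeline in the spirit of the one
# described above. Phase 1 (coding), done by ChatGPT-4 in the study, is
# stubbed with a keyword lookup; the codebook below is purely illustrative.

CODEBOOK = {"team": "collaboration", "group": "collaboration",
            "feedback": "instructor support", "software": "ease of use"}

def code_response(text: str) -> set:
    """Phase 1: assign codes to one open-text response (LLM stand-in)."""
    return {code for kw, code in CODEBOOK.items() if kw in text.lower()}

def collate(responses) -> Counter:
    """Phase 2: tally how often each code appears across responses."""
    counts = Counter()
    for r in responses:
        counts.update(code_response(r))
    return counts

# Phase 3 in the study visualizes these counts as themes.
themes = collate(["Loved the team work", "Rapid feedback on the software"])
```

As the abstract notes, human oversight would still be needed at each phase to correct miscodings and check for hallucinated themes.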

  • Research Article
  • 10.3390/app15031119
An Automated Hierarchy Method to Improve History Record Accessibility in Text-to-Image Generative AI
  • Jan 23, 2025
  • Applied Sciences
  • Hui-Jun Kim + 3 more

This study aims to enhance access to historical records by improving the efficiency of record retrieval in generative AI, which is increasingly utilized across various fields for generating visual content and gaining inspiration due to its ease of use. Currently, most generative AIs, such as Dall-E and Midjourney, employ conversational user interfaces (CUIs) for content creation and record retrieval. While CUIs facilitate natural interactions between complex AI models and users by making the creation process straightforward, they have limitations when it comes to navigating past records. Specifically, CUIs require numerous interactions, and users must sift through unnecessary information to find desired records, a challenge that intensifies as the volume of information grows. To address these limitations, we propose an automatic hierarchy method. This method, considering the modality characteristics of text-to-image applications, is implemented with two approaches: vision-based (output images) and prompt-based (input text) approaches. To validate the effectiveness of the automatic hierarchy method and assess the impact of these two approaches on users, we conducted a user study with 12 participants. The results indicated that the automatic hierarchy method enables more efficient record retrieval than traditional CUIs, and user preferences between the two approaches varied depending on their work patterns. This study contributes to overcoming the limitations of linear record retrieval in existing CUI systems through the development of an automatic hierarchy method. It also enhances record retrieval accessibility, which is essential for generative AI to function as an effective tool, and suggests future directions for research in this area.
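One way to picture the prompt-based variant of the automatic hierarchy method described above is a grouping pass over the CUI history: records whose input prompts share enough words are placed under the same group. The similarity measure, threshold, and greedy strategy below are assumptions for illustration, not the paper's algorithm:

```python
# Illustrative prompt-based grouping over a text-to-image CUI history,
# in the spirit of the automatic hierarchy method described above.
# Jaccard word overlap and the 0.3 threshold are invented for this sketch.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two prompts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def group_history(prompts, threshold=0.3):
    """Greedily assign each prompt to the first group it resembles."""
    groups = []  # each group is a list of related prompts
    for p in prompts:
        for g in groups:
            if jaccard(p, g[0]) >= threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

history = ["a red fox in snow", "a red fox in a forest",
           "city skyline at night"]
groups = group_history(history)
```

A vision-based variant would instead compare the output images; the paper's user study suggests which of the two suits a user depends on their work pattern.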

  • Research Article
  • Cited by: 9
  • 10.1016/j.caeai.2024.100289
Large language models meet user interfaces: The case of provisioning feedback
  • Sep 11, 2024
  • Computers and Education: Artificial Intelligence
  • Stanislav Pozdniakov + 7 more

Incorporating Generative Artificial Intelligence (GenAI), especially Large Language Models (LLMs), into educational settings presents valuable opportunities to boost the efficiency of educators and enrich the learning experiences of students. A significant portion of the current use of LLMs by educators has involved using conversational user interfaces (CUIs), such as chat windows, for functions like generating educational materials or offering feedback to learners. The ability to engage in real-time conversations with LLMs, which can enhance educators' domain knowledge across various subjects, has been of high value. However, it also presents challenges to LLMs' widespread, ethical, and effective adoption. Firstly, educators must have a degree of expertise, including tool familiarity, AI literacy and prompting to effectively use CUIs, which can be a barrier to adoption. Secondly, the open-ended design of CUIs makes them exceptionally powerful, which raises ethical concerns, particularly when used for high-stakes decisions like grading. Additionally, there are risks related to privacy and intellectual property, stemming from the potential unauthorised sharing of sensitive information. Finally, CUIs are designed for short, synchronous interactions and often struggle and hallucinate when given complex, multi-step tasks (e.g., providing individual feedback based on a rubric on a large scale). To address these challenges, we explored the benefits of transitioning away from employing LLMs via CUIs to the creation of applications with user-friendly interfaces that leverage LLMs through API calls. We first propose a framework for pedagogically sound and ethically responsible incorporation of GenAI into educational tools, emphasizing a human-centred design. 
We then illustrate the application of our framework to the design and implementation of a novel tool called Feedback Copilot, which enables instructors to provide students with personalized qualitative feedback on their assignments in classes of any size. An evaluation involving the generation of feedback from two distinct variations of the Feedback Copilot tool, using numerically graded assignments from 338 students, demonstrates the viability and effectiveness of our approach. Our findings have significant implications for GenAI application researchers, educators seeking to leverage accessible GenAI tools, and educational technologists aiming to transcend the limitations of conversational AI interfaces, thereby charting a course for the future of GenAI in education.

  • Research Article
  • Cited by: 1
  • 10.1215/2834703x-11556029
Don't Forget That There Are People in the Data: LLMs in the Context of Human Rights
  • Oct 1, 2024
  • Critical AI
  • Wendy H Wong

Large language models (LLMs), and generative AI generally, raise significant concerns regarding human rights. Their promise in finding insights in patterns of data has to be weighed against potential risks to individuals and societies. The typical perspective, which emphasizes the accuracy, capability, or scope of such systems, overlooks the fact that generative AI technologies exploit massive collections of data about human behaviors, thoughts, and ideas. The datafication of human life should be examined through the lens of human rights, specifically with regard to autonomy, dignity, equality, and community. This piece argues that discussions about LLMs and generative AI are inherently linked to data originating from individuals, whose information is embedded in the training data. Data are a human rights issue because information about individuals is buried in the data. Technical solutions alone are insufficient to address the human rights distortions produced by LLMs. Policy should focus instead on the fact that data are collected on rights-bearing individuals and groups who have been given very little leeway to discuss the implications of, or choose whether to be part of, the enterprise of creating generative AI.

  • Research Article
  • Cited by: 34
  • 10.3390/info15110697
Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review
  • Nov 4, 2024
  • Information
  • Georgios Feretzakis + 3 more

Generative AI, including large language models (LLMs), has transformed the paradigm of data generation and creative content, but this progress raises critical privacy concerns, especially when models are trained on sensitive data. This review provides a comprehensive overview of privacy-preserving techniques aimed at safeguarding data privacy in generative AI, such as differential privacy (DP), federated learning (FL), homomorphic encryption (HE), and secure multi-party computation (SMPC). These techniques mitigate risks like model inversion, data leakage, and membership inference attacks, which are particularly relevant to LLMs. Additionally, the review explores emerging solutions, including privacy-enhancing technologies and post-quantum cryptography, as future directions for enhancing privacy in generative AI systems. Recognizing that achieving absolute privacy is mathematically impossible, the review emphasizes the necessity of aligning technical safeguards with legal and regulatory frameworks to ensure compliance with data protection laws. By discussing the ethical and legal implications of privacy risks in generative AI, the review underscores the need for a balanced approach that considers performance, scalability, and privacy preservation. The findings highlight the need for ongoing research and innovation to develop privacy-preserving techniques that keep pace with the scaling of generative AI, especially in large language models, while adhering to regulatory and ethical standards.

  • Supplementary Content
  • Cited by: 1
  • 10.1007/s12194-025-00968-1
Generative AI and foundation models in medical image
  • Jan 1, 2025
  • Radiological Physics and Technology
  • Masahiro Oda

In recent years, generative AI has attracted significant public attention, and its use has been rapidly expanding across a wide range of domains. From creative tasks such as text summarization, idea generation, and source code generation, to the streamlining of medical support tasks like diagnostic report generation and summarization, AI is now deeply involved in many areas. Today’s breadth of AI applications is clearly distinct from what was seen before generative AI gained widespread recognition. Representative generative AI services include DALL·E 3 (OpenAI, California, USA) and Stable Diffusion (Stability AI, London, England, UK) for image generation, ChatGPT (OpenAI, California, USA), and Gemini (Google, California, USA) for text generation. The rise of generative AI has been influenced by advances in deep learning models and the scaling up of data, models, and computational resources based on the Scaling Laws. Moreover, the emergence of foundation models, which are trained on large-scale datasets and possess general-purpose knowledge applicable to various downstream tasks, is creating a new paradigm in AI development. These shifts brought about by generative AI and foundation models also profoundly impact medical image processing, fundamentally changing the framework for AI development in healthcare. This paper provides an overview of diffusion models used in image generation AI and large language models (LLMs) used in text generation AI, and introduces their applications in medical support. This paper also discusses foundation models, which are gaining attention alongside generative AI, including their construction methods and applications in the medical field. Finally, the paper explores how to develop foundation models and high-performance AI for medical support by fully utilizing national data and computational resources.

  • Research Article
  • 10.30574/wjarr.2025.25.3.0892
Generative AI and large language models: The key to creating intelligent, sustainable, and connected cities of the future
  • Mar 30, 2025
  • World Journal of Advanced Research and Reviews
  • Abdullah Birisowo + 6 more

This review paper explores how Generative AI (GAI) and Large Language Models (LLMs) have the potential to reshape smart cities in the industry 5.0 era. By examining case studies and relevant literature, we analyze the influence of these technologies on industrial operations and urban management. The paper focuses on GAI as a key tool for optimizing industries and enabling predictive maintenance, while demonstrating how experts can leverage LLMs to enhance municipal services and communication with citizens. It also discusses the practical and ethical challenges of implementing these technologies. Additionally, the paper highlights emerging trends, illustrated through real-world examples ranging from factories to city-wide pilot projects, and identifies potential pitfalls. The widespread adoption of GAI faces obstacles such as infrastructure constraints and the lack of specialized knowledge needed for effective implementation. While LLMs open new opportunities for citizen services in smart cities, they also raise concerns about privacy, which this study seeks to address. Finally, the paper suggests future research areas, including the development of new ethical AI frameworks and long-term studies on the societal impacts of these technologies. This paper serves as a starting point for industrial leaders and urban developers to navigate the complexities of integrating GAI and LLMs, balancing technological innovation with ethical considerations.

  • Research Article
  • 10.55041/ijsrem39848
Review Analyzer
  • Dec 15, 2024
  • INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Dr V Siva Nagaraju + 2 more

This project focuses on leveraging generative AI to analyse user reviews from the Play Store for diverse applications. By utilizing advanced large language models (LLMs), the system processes extensive user feedback to identify key trends, sentiments, and actionable insights. The AI analyses reviews to categorize overall user sentiment (positive, negative, or neutral), highlight recurring issues, and identify popular feature requests. The generative AI's capabilities enable it to provide nuanced suggestions to developers, such as improving app functionality, addressing common complaints, and implementing features aligned with user preferences. Additionally, the system can generate concise summaries of user reactions, offering developers a clear understanding of their app's strengths and areas needing improvement. This data-driven approach enhances app development by prioritizing updates based on real user needs, improving user satisfaction, and fostering higher ratings on the Play Store. The integration of generative AI streamlines the review analysis process, ensuring actionable recommendations for sustained app success. Key Words: Generative AI, Large Language Models (LLMs), sentiment analysis, App improvement, review analysis
1. INTRODUCTION
This project utilizes generative AI to analyze Play Store reviews across various applications, providing developers with actionable insights. By leveraging large language models (LLMs), the system processes user feedback to identify sentiment trends, highlight common issues, and suggest improvements. This approach enables developers to understand user reactions effectively and prioritize updates, ultimately enhancing app quality, user satisfaction, and ratings. The integration of AI streamlines review analysis, ensuring data-driven decisions for better app performance.
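The sentiment-categorization step described above can be sketched as follows. In the paper an LLM performs the classification; here it is stubbed with a toy word-list heuristic, and the lexicon and sample reviews are illustrative only:

```python
# Illustrative sketch of the review-analysis pipeline: classify each
# review's sentiment (an LLM's job in the paper, a toy lexicon here) and
# aggregate the counts to surface overall trends for developers.

POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"crash", "slow", "bug"}

def classify(review: str) -> str:
    """Label one review as positive, negative, or neutral."""
    words = set(review.lower().split())
    if words & POSITIVE and not words & NEGATIVE:
        return "positive"
    if words & NEGATIVE and not words & POSITIVE:
        return "negative"
    return "neutral"

def summarize(reviews):
    """Tally sentiment labels across a batch of reviews."""
    summary = {"positive": 0, "negative": 0, "neutral": 0}
    for r in reviews:
        summary[classify(r)] += 1
    return summary

summary = summarize(["great app i love it",
                     "the app is slow and full of bug reports",
                     "it works"])
```

An LLM replaces the lexicon precisely because it can also extract the recurring issues and feature requests the abstract mentions, which a word list cannot.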
