“I'm treating it kind of like a diary”: Characterizing How Users with Disabilities Use AI Chatbots

Abstract

Marginalized and underrepresented groups, including the disability community, are at risk of being harmed by bias and underserved by emerging technologies such as Large Language Models (LLMs) and their downstream applications. While previous work has identified types of harm that LLM technologies impose on the disability community, there is a gap in research on the use cases and needs of people with disabilities when interacting with such technologies. To fill this knowledge gap, our study investigates real-life interactions between people with disabilities and LLM-based chatbots. Grounded in the Disability Justice principle of following the leadership of "the most impacted," we interviewed 30 people with disabilities to learn about their current uses of chatbots. We analyzed 18 interviews and characterized distinct use cases, such as navigating social interactions, revising written work, and playing games, grouping them into nine broader categories. Our discussion considers how the presented chatbot use cases can serve as a foundation for further disability representation work with LLMs.

Similar Papers
  • Research Article
  • Citations: 601
  • 10.1016/j.future.2018.01.055
In the shades of the uncanny valley: An experimental study of human–chatbot interaction
  • Feb 6, 2018
  • Future Generation Computer Systems
  • Leon Ciechanowski + 3 more


  • Research Article
  • Citations: 8
  • 10.5539/ies.v16n5p19
Intelligent Educational Recommendation Platform with AI Chatbots
  • Sep 24, 2023
  • International Education Studies
  • Thanarat Kingchang + 2 more

The objectives of this research were to: 1) analyze, 2) design, and 3) develop the architecture of an intelligent educational recommendation platform with AI chatbots, and 4) study the appropriateness of the developed platform. The sample used in the research was seven experts in information system development from various higher education institutions. The architecture of the platform has two main components: 1) stakeholders, consisting of system administrators and external users, and 2) the working process of the platform, which consists of four parts: natural language processing, dialog management, database and application programming interface (API), and response generation. Assessment of the architecture's appropriateness found that it was rated highly appropriate overall, for individual elements, and for integrated elements. The architecture can therefore serve as a guideline for developing platforms with AI chatbots in the future.

  • Research Article
  • 10.1007/s11606-025-10145-0
"I Double Checked It with My Own Knowledge:" Physician Perspectives on the Use of AI Chatbots for Clinical Decision-Making.
  • Jan 21, 2026
  • Journal of general internal medicine
  • Hannah Kerman + 11 more

AI chatbots are proliferating in healthcare systems. It is essential to explore how physicians use these tools in order to understand their influence on clinical care and outcomes. Our goal was to understand how physicians conceive of and incorporate AI into clinical decision-making. We conducted semistructured interviews with generalist physicians from inpatient and outpatient settings in the USA. Prior to the interview, participants were asked to use an AI chatbot, ChatGPT-4, to complete three mock clinical cases. Physicians were interviewed regarding their perspectives on the AI chatbot. Interviews were conducted via video conference, recorded, transcribed, and analyzed using reflexive thematic analysis. We interviewed 22 physicians with 2–32 years of experience (median = 3 years). We identified a central organizing concept of "physician as filter" defining how physicians used the AI chatbot. This idea was composed of four themes. Theme 1: Physicians perceive clinical decision-making as a problem-solving activity, applying internally held knowledge to externally gathered information. Theme 2: AI chatbot systems are part of a continuum of information resources. Theme 3: Trust in the AI chatbot's outputs depends on the user's own clinical knowledge. Theme 4: Clinical decision-making is understood as the personalization of clinical knowledge and context. AI chatbots may help physicians formulate a clinical problem and generate hypotheses by expanding their repertoire of possible cases. Despite the "wealth of information" provided by AI chatbots, physician trust in the outputs is limited, especially when AI chatbots do not provide references. Physician users described filtering chatbot outputs, using their own clinical knowledge and experience, to determine what information is relevant.
In describing how physicians perceive AI chatbots, we hope to guide further investigation of physician-AI interaction and chatbot development that facilitates improved clinical reasoning.

  • Research Article
  • Citations: 2
  • 10.46328/ijte.1035
A Bibliometric Analysis and Systematic Review in AI Chatbots in Language Teaching and Learning
  • Apr 30, 2025
  • International Journal of Technology in Education
  • Hui Wen Chua + 1 more

The role of AI chatbots has shifted over time: they were first used for native English language learning, later for learning English as a second language (ESL) and English as a foreign language, and most recently for learning other foreign languages. Given these changes, there is a need to analyse the development of AI chatbots between 2006 and 2024 and their influence on language education. This bibliometric analysis and systematic review therefore aims to identify state-of-the-art topics in the use of AI chatbots for language teaching and learning, and to examine how different AI chatbots influence teachers' and students' perspectives on language teaching and learning as well as students' learning outcomes. The research concludes with five recommendations: (1) extend studies to students and teachers from various regions, language proficiency levels, and communities with different cultural backgrounds; (2) employ longitudinal research to detect any novelty effect or other changes in learning outcomes, affective gains, and factors influencing AI chatbot use over an extended period; (3) focus on developing strategies, language learning models and processes, teaching approaches and methods, assistance from teachers and peers, and guidelines for effectively integrating AI chatbots, especially LLM-based chatbots, into curricula; (4) study the effects of learning with self-developed or LLM-based AI chatbots that incorporate more intelligent, realistic agents capable of expressions, gestures, and movements, or additional games, quizzes, and multimedia elements, in enhancing language learning; and (5) investigate the factors influencing teachers' and students' acceptance of AI chatbots.

  • Research Article
  • 10.5539/ies.v18n3p70
Architecture of the Service Platform via Artificial Intelligence Chatbots to Promote Students’ Digital Competency
  • May 25, 2025
  • International Education Studies
  • Nopparat Klayklueng + 1 more

The architecture of the service platform via AI chatbots is a research tool built specifically to promote digital competency through the use of AI chatbots. The architecture integrates artificial intelligence technology with chatbot technology to create a user interface that interacts with users through natural language: the platform analyzes questions or keywords from users and responds with optimal answers. The objectives of this research are (1) to synthesize the conceptual framework of the architecture of the service platform via AI chatbots, (2) to develop the architecture, and (3) to study the results after developing it. The research instruments consist of (1) the architecture of the service platform via AI chatbots and (2) an evaluation form on the suitability of the architecture. The results show that the suitability of the architecture is rated at the highest level. However, this study is merely a pilot, intended primarily to explore the concepts and feasibility of a prototype architecture before using it as a guideline for developing other AI chatbot service platforms that can be put into practical use in the future.

  • Research Article
  • Citations: 11
  • 10.1108/oir-06-2024-0375
“Talk to me, I’m secure”: investigating information disclosure to AI chatbots in the context of privacy calculus
  • Feb 26, 2025
  • Online Information Review
  • Xiaoxiao Meng + 1 more

Purpose This study aims to explain the privacy paradox, wherein individuals, despite privacy concerns, are willing to share personal information while using AI chatbots. Departing from previous research that primarily viewed AI chatbots from a non-anthropomorphic approach, this paper contends that AI chatbots are taking on an emotional component for humans. The study thus considers both rational and non-rational perspectives, providing a more comprehensive understanding of user behavior in digital environments. Design/methodology/approach Employing a questionnaire survey (N = 480), this research focuses on young users who regularly engage with AI chatbots. Drawing upon parasocial interaction theory and privacy calculus theory, the study elucidates the mechanisms governing users' willingness to disclose information. Findings Findings show that cognitive, emotional, and behavioral dimensions all positively influence the perceived benefits of using ChatGPT, which in turn enhance privacy disclosure. Of the three dimensions expected to reduce perceived risk, only the emotional and behavioral dimensions have a significant effect; perceived risk in turn negatively influences privacy disclosure. Notably, the cognitive dimension's lack of a significant mediating effect suggests that users' awareness of privacy risks does not deter disclosure. Instead, emotional factors drive privacy decisions, with users more likely to disclose personal information based on positive experiences and engagement with ChatGPT. This confirms the existence of the privacy paradox. Research limitations/implications This study acknowledges several limitations. While the sample was adequately stratified, the focus was primarily on young users in China. Future research should explore broader demographic groups, including elderly users, to understand how different age groups engage with AI chatbots.
Additionally, although the study was conducted within the Chinese context, the findings have broader applicability, highlighting the potential for cross-cultural comparisons. Differences in user attitudes toward AI chatbots may arise due to cultural variations, with East Asian cultures typically exhibiting a more positive attitude toward social AI systems compared to Western cultures. This cultural distinction—rooted in Eastern philosophies such as animism in Shintoism and Buddhism—suggests that East Asians are more likely to anthropomorphize technology, unlike their Western counterparts (Yam et al., 2023; Folk et al., 2023). Practical implications The findings of this study offer valuable insights for developers, policymakers and educators navigating the rapidly evolving landscape of intelligent technologies. First, regarding technology design, the study suggests that AI chatbot developers should not focus solely on functional aspects but also consider emotional and social dimensions in user interactions. By enhancing emotional connection and ensuring transparent privacy communication, developers can significantly improve user experiences (Meng and Dai, 2021). Second, there is a pressing need for comprehensive user education programs. As users tend to prioritize perceived benefits over risks, it is essential to raise awareness about privacy risks while also emphasizing the positive outcomes of responsible information sharing. This can help foster a more informed and balanced approach to user engagement (Vimalkumar et al., 2021). Third, cultural and ethical considerations must be incorporated into AI chatbot design. In collectivist societies like China, users may prioritize emotional satisfaction and societal harmony over privacy concerns (Trepte, 2017; Johnston, 2009). Developers and policymakers should account for these cultural factors when designing AI systems. 
Furthermore, AI systems should communicate privacy policies clearly to users, addressing potential vulnerabilities and ensuring that users are aware of the extent to which their data may be exposed (Wu et al., 2024). Lastly, as AI chatbots become deeply integrated into daily life, there is a growing need for societal discussions on privacy norms and trust in AI systems. This research prompts a reflection on the evolving relationship between technology and personal privacy, especially in societies where trust is shaped by cultural and emotional factors. Developing frameworks to ensure responsible AI practices while fostering user trust is crucial for the long-term societal integration of AI technologies (Nah et al., 2023). Originality/value The study’s findings not only draw deeper theoretical insights into the role of emotions in generative artificial intelligence (gAI) chatbot engagement, enriching the emotional research orientation and framework concerning chatbots, but they also contribute to the literature on human–computer interaction and technology acceptance within the framework of the privacy calculus theory, providing practical insights for developers, policymakers and educators navigating the evolving landscape of intelligent technologies.

  • Research Article
  • Citations: 28
  • 10.1016/j.tmrv.2023.150753
Battle of the (Chat)Bots: Comparing Large Language Models to Practice Guidelines for Transfusion-Associated Graft-Versus-Host Disease Prevention.
  • Jul 1, 2023
  • Transfusion Medicine Reviews
  • Laura D Stephens + 3 more

Published guidelines and clinical practices vary when defining indications for irradiation of blood components for the prevention of transfusion-associated graft-versus-host disease (TA-GVHD). This study assessed irradiation indication lists generated by multiple artificial intelligence (AI) programs, or chatbots, and compared them to 2020 British Society for Haematology (BSH) practice guidelines. Four chatbots (ChatGPT-3.5, ChatGPT-4, Bard, and Bing Chat) were prompted to list the indications for irradiation to prevent TA-GVHD. Responses were graded for concordance with BSH guidelines. Chatbot response length, discrepancies, and omissions were noted. Chatbot responses differed, but all were relevant, short in length, generally more concordant than discordant with BSH guidelines, and roughly complete. They lacked several indications listed in BSH guidelines and notably differed in their irradiation eligibility criteria for fetuses and neonates. The chatbots variably listed erroneous indications for TA-GVHD prevention, such as patients receiving blood from a donor who is of a different race or ethnicity. This study demonstrates the potential use of generative AI for transfusion medicine and hematology topics but underscores the risk of chatbot medical misinformation. Further study of risk factors for TA-GVHD, as well as the applications of chatbots in transfusion medicine and hematology, is warranted.

  • Research Article
  • Cite Count Icon 1
  • 10.30748/soi.2020.160.13
Using the @es_economy_karkas_bot Chatbot for Online Consultation with an Expert System
  • Mar 30, 2020
  • Системи обробки інформації
  • В.П Бурдаєв

A chatbot (conversational agent) is a program that imitates human conversation using elements of artificial intelligence. The article presents the results of integrating the @es_economy_karkas_bot chatbot with an expert system to provide online consultations. It describes the architecture and implementation of the Telegram messenger chatbot within an expert system built on the KARKAS system, a tool for constructing knowledge base models. The structure and interaction algorithm of the chatbot and the expert system's agents in online mode are considered, and the possibilities of creating chatbots in the Telegram messenger and integrating them with expert systems in the field of economics are analyzed.

  • Research Article
  • Citations: 2
  • 10.1016/j.jpurol.2025.08.029
Quality of information on hypospadias from artificial intelligence chatbots: How safe is AI for patient and family information?
  • Dec 1, 2025
  • Journal of pediatric urology
  • Peter Stapleton + 6 more


  • Conference Article
  • Citations: 16
  • 10.1109/iccsea49143.2020.9132917
HR Based Interactive Chat bot (PowerBot)
  • Mar 1, 2020
  • Prof Shabana Tadvi + 2 more

In the age of machine intelligence, Computer Science has advanced greatly over the past decade, and Artificial Intelligence stands distinguished among those advances. A chatbot is a computer program capable of a simulated interaction with the user, such that the user does not feel they are talking directly to a machine. For a chatbot to imitate human dialogue, the input entered by a user must be precisely analyzed so that the system can produce meaningful and pertinent feedback. Nowadays, people interact with systems more than with other humans. This project implements an HR chatbot using tools that expose artificial intelligence methods such as natural language understanding, allowing users to interact with the chatbot in natural language and training the chatbot with appropriate methods so that it can generate responses. The chatbot lets users view all details regarding the company from within the chat. Currently, for basic requests, an employee must go to their team leader or to HR; to overcome this, the proposed chatbot fulfils requirements such as applying for leave, requesting reimbursement, applying for an allowance, and similar tasks. The chatbot provides personal and efficient communication between employees and HR so they can manage their jobs and get assistance when needed, such as answering queries and requesting leave. Because the chatbot uses natural language in its messages, users can feel confident and comfortable regardless of their computer literacy.
It also provides an accessible and efficient service, since all interactions take place within a single chat conversation, removing the need for the employee to navigate through a separate system.

  • Research Article
  • Citations: 42
  • 10.2196/57132
Utilization of, Perceptions on, and Intention to Use AI Chatbots Among Medical Students in China: National Cross-Sectional Study
  • Oct 28, 2024
  • JMIR Medical Education
  • Wenjuan Tao + 2 more

Background: Artificial intelligence (AI) chatbots are poised to have a profound impact on medical education. Medical students, as early adopters of technology and future health care providers, play a crucial role in shaping the future of health care. However, little is known about the utilization of, perceptions on, and intention to use AI chatbots among medical students in China. Objective: This study aims to explore the utilization of, perceptions on, and intention to use generative AI chatbots among medical students in China, using the Unified Theory of Acceptance and Use of Technology (UTAUT) framework. By conducting a national cross-sectional survey, we sought to identify the key determinants that influence medical students' acceptance of AI chatbots, thereby providing a basis for enhancing their integration into medical education. Understanding these factors is crucial for educators, policy makers, and technology developers to design and implement effective AI-driven educational tools that align with the needs and expectations of future health care professionals. Methods: A web-based electronic survey questionnaire was developed and distributed via social media to medical students across the country. The UTAUT was used as a theoretical framework to design the questionnaire and analyze the data. The relationship between behavioral intention to use AI chatbots and UTAUT predictors was examined using multivariable regression. Results: A total of 693 participants were from 57 universities covering 21 provinces or municipalities in China. Only a minority (199/693, 28.72%) reported using AI chatbots for studying, with ChatGPT (129/693, 18.61%) being the most commonly used. Most of the participants used AI chatbots for quickly obtaining medical information and knowledge (631/693, 91.05%) and increasing learning efficiency (594/693, 85.71%).
Utilization behavior, social influence, facilitating conditions, perceived risk, and personal innovativeness showed significant positive associations with the behavioral intention to use AI chatbots (all P values were <.05). Conclusions: Chinese medical students hold positive perceptions toward and high intentions to use AI chatbots, but there are gaps between intention and actual adoption. This highlights the need for strategies to improve access, training, and support and to provide peer usage examples to fully harness the potential benefits of chatbot technology.

  • Research Article
  • Citations: 7
  • 10.1080/08874417.2025.2456750
Consumers’ Intentions to Use AI Chatbots on Online Shopping Platforms
  • Jan 28, 2025
  • Journal of Computer Information Systems
  • Thuy Dung Pham Thi + 1 more

This study explores the perceptual and psychological factors influencing users’ intentions to use AI chatbots on online shopping platforms. Specifically, it examines the impact of social presence, perceived expertise, and individual differences like introversion on the intentions to use AI chatbots. The study surveyed 213 online shoppers, using SmartPLS 4.0 for path analysis and to test the moderating role of social presence on the relationship between introversion and the intentions to use AI chatbots. The results indicate that introversion, social presence, and perceived expertise are positively associated with the intentions to use AI chatbots. Additionally, perceived expertise is positively associated with social presence, and social presence moderates the relationship between introversion and the intentions to use AI chatbots. These findings offer insights into the psychological factors affecting users’ decisions to engage with AI chatbots, contributing to a deeper understanding of how individual traits and perceptions influence online shopping experiences.

  • Research Article
  • Citations: 51
  • 10.1089/tmj.2023.0313
Factors Predicting Intentions of Adoption and Continued Use of Artificial Intelligence Chatbots for Mental Health: Examining the Role of UTAUT Model, Stigma, Privacy Concerns, and Artificial Intelligence Hesitancy.
  • Sep 27, 2023
  • Telemedicine journal and e-health : the official journal of the American Telemedicine Association
  • Lin Li + 2 more

Background: Artificial intelligence-based chatbots (AI chatbots) can potentially improve mental health care, yet factors predicting their adoption and continued use are unclear. Methods: We conducted an online survey with a sample of U.S. adults with symptoms of depression and anxiety (N = 393) in 2021 before the release of ChatGPT. We explored factors predicting the adoption and continued use of AI chatbots, including factors of the unified theory of acceptance and use of technology model, stigma, privacy concerns, and AI hesitancy. Results: Results from the regression indicated that for nonusers, performance expectancy, price value, descriptive norm, and psychological distress are positively related to the intention of adopting AI chatbots, while AI hesitancy and effort expectancy are negatively associated with adopting AI chatbots. For those with experience in using AI chatbots for mental health, performance expectancy, price value, descriptive norm, and injunctive norm are positively related to the intention of continuing to use AI chatbots. Conclusions: Understanding the adoption and continued use of AI chatbots among adults with symptoms of depression and anxiety is essential given that there is a widening gap in the supply and demand of care. AI chatbots provide new opportunities for quality care by supporting accessible, affordable, efficient, and personalized care. This study provides insights for developing and deploying AI chatbots such as ChatGPT in the context of mental health care. Findings could be used to design innovative interventions that encourage the adoption and continued use of AI chatbots among people with symptoms of depression and anxiety and who have difficulty accessing care.

  • Research Article
  • 10.1080/17538157.2025.2589195
AI chatbots in the PICU: parental enthusiasm contrasts with socioeconomic usage disparities
  • Nov 26, 2025
  • Informatics for Health and Social Care
  • R Brandon Hunter + 4 more

Parents of children admitted to the PICU face an overwhelming informational landscape, necessitating accessible, patient-specific information. Large Language Models (LLMs) powering AI chatbots offer a promising solution for simplifying complex medical information. We aimed to characterize parental online health information-seeking (OHIS) behaviors and attitudes toward AI chatbots by conducting a cross-sectional survey of 139 English-speaking parents of children admitted to a large academic PICU between April-August 2024. We assessed OHIS behaviors, knowledge of and experience with AI chatbots, and attitudes regarding their potential healthcare utility. Most parents (87%) engaged in OHIS using search engines (86%). Parents with higher income and education sought information more frequently (OR 3.3, 95% CI 1.8–6.2; OR 2.9, 95% CI 1.5–5.7, respectively); those with higher education were less satisfied with online resources (OR 0.5, 95% CI 0.25–0.97). Parents expressed openness toward AI chatbots in healthcare applications (median 4/6). Significant socioeconomic disparities in current AI chatbot use favored male (OR 2.5, 95% CI 1.1–6.0) and higher income (OR 3.8, 95% CI 1.1–12.7) parents. Parents of critically ill children show high OHIS behaviors and positive attitudes toward AI chatbots. Addressing significant socioeconomic disparities in AI chatbot use is crucial for developing equitable implementation strategies in the PICU.

  • Research Article
  • Citations: 17
  • 10.55908/sdgs.v11i4.794
What Role Does AI Chatbot Perform in the F&B Industry? Perspective from Loyalty and Value Co-Creation: Integrated PLS-SEM and ANN Techniques
  • Aug 24, 2023
  • Journal of Law and Sustainable Development
  • Binh Hai Thi Nguyen + 3 more

Purpose: This study examines the formation of customer loyalty and customer value co-creation toward AI chatbots by exploring the successive effects of perceived value dimensions, perceived information quality, and technological self-efficacy on online trust, and of online trust on aspects of loyalty and value co-creation. Theoretical framework: The increasingly strong human reception of a new wave of digitalization has created a need to understand how customer loyalty and customer value co-creation form for businesses applying AI chatbots in their operations to attract and retain customers. The study uses the perceived value dimensions, together with perceived information quality, technological self-efficacy, and online trust, to comprehend loyalty and value co-creation. Design/methodology/approach: The study used a self-administered questionnaire survey of 447 participants who had used Pizza Hut's AI chatbot service in Vietnam. The data were analyzed by integrating two techniques: partial least squares structural equation modeling (PLS-SEM) and artificial neural networks (ANN). Findings: The results show that the perceived value dimensions, perceived information quality, and technological self-efficacy all have a significant impact on online trust, with the exception of hedonic value; online trust in turn leads to the formation of loyalty and a high ability to co-create value. Perceived information quality has a stronger impact on online trust than technological self-efficacy. In addition, the non-linear ANN results show that attitudinal loyalty is relatively more important for value co-creation than behavioral loyalty. Research, practical & social implications: This study contributes to the emerging literature on AI chatbots by investigating the possibility of consumers and providers co-creating value.
Second, the authors examined the internal aspects of loyalty, separating it into two primary components, behavioral and attitudinal, to clarify their impact on the factors that influence AI chatbots and value co-creation. In conclusion, this research contributes to the existing body of knowledge by providing a more multidimensional perspective on these theories. Originality/value: By integrating PLS-SEM and ANN techniques to simultaneously explore both linear and non-linear mechanisms, this study explains the influence of perceived value dimensions, perceived information quality, and technological self-efficacy on loyalty and value co-creation via online trust in the AI chatbot context. In addition, it extends perceived value to explore the impact of internal and external personal factors on AI chatbots.
