Optimizing sustainable timber projects and asset management through AI-powered circular economy systems
Purpose
The circular economy (CE) model can help the construction sector to meet the UN’s sustainable development goals (SDGs). Although artificial intelligence (AI) has enhanced CE practices in various construction contexts, its integration within timber reuse and recycling remains underexplored. This study proposes a theoretical model of an AI-powered CE system to improve sustainability in timber projects and asset management.

Design/methodology/approach
A mixed-methods approach was adopted. Ten AI experts from Australian construction firms were interviewed using semi-structured questions to gain qualitative insights into how AI could optimize timber reuse. Of these participants, seven had extensive experience in AI applications for CE purposes in construction, while three were moderately familiar. Subsequently, an online survey collected quantitative data on system requirements from 102 industry professionals. These professionals included project managers, civil engineers and architects, providing broader perspectives on feasibility and adoption factors for AI in timber construction.

Findings
Analysis revealed 23 AI-driven functions that would facilitate circular design optimization, material management and real-time monitoring of building performance. These functions underscore AI’s potential to reduce timber waste, prolong asset lifespans and streamline project workflows.

Originality/value
This study advances current knowledge by providing empirical evidence (qualitative and quantitative) on AI-driven circularity in timber construction. The study demonstrates how AI can improve project execution, asset reuse and overall sustainability in the built environment. Practical recommendations are offered to guide the development and implementation of AI-powered CE systems for timber projects.
185
- 10.3390/s19163556
- Aug 15, 2019
- Sensors (Basel, Switzerland)
279
- 10.1016/j.jobe.2021.102704
- May 11, 2021
- Journal of Building Engineering
194
- 10.1016/j.enbuild.2019.109383
- Aug 24, 2019
- Energy and Buildings
21
- 10.1080/17480272.2019.1635205
- Jun 28, 2019
- Wood Material Science & Engineering
4
- 10.1007/s44150-022-00065-6
- Sep 17, 2022
- Architecture, Structures and Construction
506
- 10.1016/j.jobe.2021.103299
- Oct 5, 2021
- Journal of Building Engineering
69
- 10.1016/j.buildenv.2021.108267
- Aug 19, 2021
- Building and Environment
35
- 10.54364/aaiml.2023.1191
- Jan 1, 2023
- Advances in Artificial Intelligence and Machine Learning
44
- 10.1108/rege-07-2021-0121
- May 10, 2022
- Revista de Gestão
483
- 10.1016/j.jclepro.2018.09.244
- Oct 18, 2018
- Journal of Cleaner Production
- Research Article
1
- 10.69864/ijbsam.17-1.159
- Jan 1, 2022
- International Journal of Business Science and Applied Management
Although artificial intelligence (AI) is transforming the workplace structure, very little is known about the strategy that facilitates AI implementation in organizations. The purpose of this paper is to explore key elements in transferring knowledge of the AI implementation process in human resource management (HRM) from the perspective of AI consultants. This study utilizes qualitative data analysis techniques. We first review the literature and then conduct in-depth semistructured interviews with eight AI consultants. We analyze transcripts using the ATLAS.ti software. First, this research reveals that AI implementation is affected by a shortage of employee data, no clear vision, a limited understanding of the AI decisions framework and managers' desire to bypass AI decisions. Second, the combination of an intensive training program and assigning AI specialists is the best way to transfer the knowledge of AI implementation processes to HR managers. Third, HR managers should create communication channels and enhance employees' awareness of the positive impact that AI solutions have on smooth collaboration with AI-employees. The paper also reveals that accelerating the process of implementing AI applications has no negative impact in COVID-19 times. However, an AI bias may be considered a potential threat for AI implementation. This paper attempts to provide a practical understanding of the elements that facilitate AI implementation in the HRM process. It provides vital insights for HR managers and AI developers to benchmark their activities when designing and adopting AI solutions. It also contributes to the literature by responding to the question of how AI implementation should be provided to HR managers and employees.
- Research Article
- 10.1001/jamanetworkopen.2025.17204
- Jul 16, 2025
- JAMA Network Open
Timely disease diagnosis is challenging due to limited clinical availability and growing burdens. Although artificial intelligence (AI) has shown expert-level diagnostic accuracy, a lack of downstream accountability, including workflow integration, external validation, and further development, continues to hinder its clinical adoption. The objective was to address gaps in the downstream accountability of medical AI through a case study on age-related macular degeneration (AMD) diagnosis and severity classification. This diagnostic study developed and evaluated an AI-assisted diagnostic and classification workflow for AMD. Four rounds of diagnostic assessments (accuracy and time) were conducted with 24 clinicians from 12 institutions. Each round was randomized and alternated between manual (clinician diagnosis) and manual plus AI (clinician assisted by AI diagnosis), with a 1-month washout period. In total, 2880 AMD risk features were evaluated across 960 images from 240 Age-Related Eye Disease Study patient samples, both with and without AI assistance. For further development, the original DeepSeeNet model was enhanced into the DeepSeeNet+ model using 39 196 additional images from the US population and tested on 3 datasets, including an external set from Singapore. The main outcome was age-related macular degeneration risk features. The F1 score for accuracy (Wilcoxon rank sum test) and diagnostic time (linear mixed-effects model) were measured, comparing manual vs manual plus AI. For further development, the F1 score (Wilcoxon rank sum test) was again used. Among 240 patients (mean [SD] age, 68.5 [5.0] years; 127 female [53%]), AI assistance significantly improved accuracy for 23 of 24 clinicians, increasing the mean F1 score from 37.71 (95% CI, 27.83-44.17) to 45.52 (95% CI, 39.01-51.61), with some improvements exceeding 50%.
Manual diagnosis initially took an estimated 39.8 seconds (95% CI, 34.1-45.6 seconds) per patient, whereas manual plus AI saved 10.3 seconds (95% CI, -15.1 to -5.5 seconds) and remained faster by 6.9 seconds (95% CI, 0.2-13.7 seconds) to 8.6 seconds (95% CI, 1.8-15.3 seconds) in subsequent rounds. However, combining manual and AI did not always yield the highest accuracy or efficiency, underscoring challenges in explainability and trust. The DeepSeeNet+ model performed better in 3 test sets, achieving a significantly higher F1 score in the Singapore cohort (52.43 [95% CI, 44.38-61.00] vs 38.95 [95% CI, 30.50-47.45]). In this diagnostic study, AI assistance was associated with improved accuracy and time efficiency for AMD diagnosis. Further development is essential for enhancing AI generalizability across diverse populations. These findings highlight the need for downstream accountability during early-stage clinical evaluations of medical AI.
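The accuracy comparison in this abstract rests on a Wilcoxon rank sum (Mann-Whitney U) test of F1 scores between the manual and manual-plus-AI conditions. As an illustrative sketch only, with hypothetical F1 scores rather than the study's data, the core rank-sum statistic can be computed as:

```python
# Illustrative sketch (hypothetical data, not the study's): computing the
# Mann-Whitney U statistic that underlies a Wilcoxon rank sum comparison
# of F1 scores between two conditions.

def rank_sum_u(a, b):
    """Mann-Whitney U statistic for sample a vs sample b (midranks for ties)."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1  # average rank across a run of tied values
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    r_a = sum(ranks[:len(a)])                 # rank sum of the first sample
    return r_a - len(a) * (len(a) + 1) / 2    # U statistic for sample a

manual    = [35.2, 38.1, 36.4, 39.0, 37.5]   # hypothetical per-clinician F1
manual_ai = [44.8, 46.0, 45.1, 47.3, 44.2]
print(rank_sum_u(manual, manual_ai))  # 0.0: every AI-assisted score outranks every manual score
```

A U of zero (or near the maximum `len(a) * len(b)`) signals strong separation between the groups; in practice a library routine would also supply the p value.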
- Video Transcripts
- 10.48448/f569-wv75
- Jun 30, 2021
Although Artificial Intelligence (AI) is expected to outperform humans in many domains of decision-making, the process by which AI arrives at its superior decisions is often hidden and too complex for humans to fully grasp. As a result, humans may find it difficult to learn from AI, and accordingly, our knowledge about whether and how humans learn from AI is also limited. In this paper, we aim to expand our understanding by examining human decision-making in the board game Go. Our analysis of 1.3 million move decisions made by professional Go players suggests that people learned to make decisions like AI after they observe reasoning processes of AI, rather than mere actions of AI. Follow-up analyses compared the decision quality of two groups of players: those who had access to AI programs and those who did not. In line with the initial results, decision quality significantly improved for the players with AI access after they gained access to reasoning processes of AI, but not for the players without AI access. Our results demonstrate that humans can learn from AI even in a complex domain where the computation process of AI is also complicated.
- Research Article
- 10.5281/zenodo.5214454
- Jul 13, 2021
- Proceedings of the Annual Meeting of the Cognitive Science Society
Author(s): Shin, Minkyu; Kim, Jin; Kim, Minkyung | Abstract: Although Artificial Intelligence (AI) is expected to outperform humans in many domains of decision-making, the process by which AI arrives at its superior decisions is often hidden and too complex for humans to fully grasp. As a result, humans may find it difficult to learn from AI, and accordingly, our knowledge about whether and how humans learn from AI is also limited. In this paper, we aim to expand our understanding by examining human decision-making in the board game Go. Our analysis of 1.3 million move decisions made by professional Go players suggests that people learned to make decisions like AI after they observe reasoning processes of AI, rather than mere actions of AI. Follow-up analyses compared the decision quality of two groups of players: those who had access to AI programs and those who did not. In line with the initial results, decision quality significantly improved for the players with AI access after they gained access to reasoning processes of AI, but not for the players without AI access. Our results demonstrate that humans can learn from AI even in a complex domain where the computation process of AI is also complicated.
- Research Article
- Jul 24, 2025
- ArXiv
Importance: Timely disease diagnosis is challenging due to limited clinical availability and growing burdens. Although artificial intelligence (AI) shows expert-level diagnostic accuracy, a lack of downstream accountability, including workflow integration, external validation, and further development, continues to hinder its real-world adoption. Objective: To address gaps in the downstream accountability of medical AI through a case study on age-related macular degeneration (AMD) diagnosis and severity classification. Design, Setting, and Participants: We developed and evaluated an AI-assisted diagnostic and classification workflow for AMD. Four rounds of diagnostic assessments (accuracy and time) were conducted with 24 clinicians from 12 institutions. Each round was randomized and alternated between Manual and Manual + AI, with a washout period. In total, 2,880 AMD risk features were evaluated across 960 images from 240 Age-Related Eye Disease Study patient samples, both with and without AI assistance. For further development, we enhanced the original DeepSeeNet model into DeepSeeNet+ using ~40,000 additional images from the US population and tested it on three datasets, including an external set from Singapore. Main Outcomes and Measures: We measured the F1-score for accuracy (Wilcoxon rank-sum test) and diagnostic time (linear mixed-effects model), comparing Manual vs. Manual + AI. For further development, the F1-score (Wilcoxon rank-sum) was again used. Results: Among the 240 patients (mean age, 68.5 years; 53% female), AI assistance improved accuracy for 23 of 24 clinicians, increasing the average F1-score by 20% (37.71 to 45.52), with some improvements exceeding 50%. Manual diagnosis initially took an estimated 39.8 seconds per patient, whereas Manual + AI saved 10.3 seconds and remained 1.7–3.3 seconds faster in later rounds. However, combining manual and AI may not always yield the highest accuracy or efficiency, underscoring challenges in explainability and trust.
DeepSeeNet+ performed better in three test sets, achieving a 13% higher F1-score in the Singapore cohort. Conclusions and Relevance: In this diagnostic study, AI assistance improved both accuracy and time efficiency for AMD diagnosis. Further development was essential for enhancing AI generalizability across diverse populations. These findings highlight the need for downstream accountability during early-stage clinical evaluations of medical AI. All code and models are publicly available.
- Research Article
1
- 10.1177/20552076251330552
- Apr 1, 2025
- Digital health
Although artificial intelligence (AI) can boost clinical decision-making, personalize patient treatment, and advance the global health sector, developing countries face unique implementation challenges and considerations. Users' perceptions, attitudes, and behavioral factors have received limited study in Ethiopia. This study aimed to explore AI in healthcare from the perspectives of health professionals in a resource-limited setting. We employed a cross-sectional descriptive study of 404 health professionals. Data were collected using a self-structured questionnaire with a simple random sampling technique and analyzed in SPSS; tables and graphs were used to present the findings. A 95.7% response rate was achieved. The mean age of the respondents was 32.57 (SD 5.34) years. A total of 254 (62.9%) participants held a Bachelor of Science degree, and 156 (38.6%) were medical doctors. More than half (52.2%) said AI would be applicable for diagnosis and treatment purposes in healthcare organizations. The study found that a favorable attitude, good knowledge, and formal training in AI technologies would make clinical decision-making more efficient and accurate. It also identified potential barriers to AI technologies in healthcare, such as ethical issues and the privacy and security of patient data. Participants' self-reported concerns about AI likewise included the privacy and security of data, ethical issues, and the accuracy of AI systems; attention should be given to overcoming these barriers in the health system.
Providing training, allocating time to practice AI tools, incorporating AI courses into medical education curricula, and improving knowledge can expand the use of AI systems in healthcare settings.
- Research Article
1
- 10.18178/ijiet.2023.13.12.2005
- Jan 1, 2023
- International Journal of Information and Education Technology
Although Artificial Intelligence (AI) is already being used in a variety of ways to support creativity and education, there are still limitations when it comes to understanding how AI becomes intelligent, its impacts and how to manipulate, tinker with and explore future uses. This work builds on the idea of “syntonicity” as a cognitive tool where learners benefit from their existing understanding of intelligence while learning about AI. This work presents a learning framework called “Neural Syntonicity” which describes the syntonic relationship between the student’s thoughts and reflections while learning how to use and train AI Image Recognition tools. In this project we: 1) developed a series of Machine Learning Image Recognition software tools that students can manipulate and tinker with, 2) developed a “microworld” of activities and learning materials that supports a conducive learning environment for students to learn about Image Recognition, and 3) developed scenarios that allow students to explore their own cognitive labels of visual Image Recognition while using these tools. The research also aims to help students uncover “Powerful Ideas” and learn technical knowledge in Artificial Intelligence like: prediction, data clustering, accuracy, data bias, training and societal impacts. Using a mixed methods approach of Design Based Research, we conducted studies with three different groups of students. Through the analysis, we found that all groups of students gained confidence with using AI, and learned new technical skills in AI. Students were also able to demonstrate through a variety of examples that bias is a factor that can be controlled in AI systems as well as in the human mind.
- Research Article
1
- 10.1007/s10596-024-10317-7
- Sep 2, 2024
- Computational Geosciences
Although Artificial Intelligence (AI) projects are common and desired by many institutions and research teams, there are still relatively few success stories of AI in practical use for the Earth science community. Many AI practitioners in Earth science are trapped in the prototyping stage and their results have not yet been adopted by users. Many scientists are still hesitating to use AI in their research routine. This paper aims to capture the landscape of AI-powered geospatial data sciences by discussing the current and upcoming needs of the Earth and environmental community, such as what practical AI should look like, how to realize practical AI based on the current technical and data restrictions, and the expected outcome of AI projects and their long-term benefits and problems. This paper also discusses unavoidable changes in the near future concerning AI, such as the fast evolution of AI foundation models and AI laws, and how the Earth and environmental community should adapt to these changes. This paper provides an important reference to the geospatial data science community to adjust their research road maps, find best practices, boost the FAIRness (Findable, Accessible, Interoperable, and Reusable) aspects of AI research, and reasonably allocate human and computational resources to increase the practicality and efficiency of Earth AI research.
- Research Article
- 10.14419/negfns98
- Nov 2, 2025
- International Journal of Basic and Applied Sciences
Although artificial intelligence (AI) has become a powerful driver of innovation in marketing, existing research often treats its applications in predictive analytics, customer segmentation, and personalization as fragmented domains. This lack of integration limits a comprehensive understanding of how AI can shape modern marketing strategies. To address this gap, this study conducted a systematic review of 20 peer-reviewed articles published between 2020 and 2025, following PRISMA guidelines. Bibliometric techniques and thematic content analysis were employed to identify intellectual structures, citation patterns, and emerging research themes. The analysis revealed four thematic clusters: (1) AI for personalization and customer relationship management (CRM), (2) predictive analytics and strategic marketing, (3) global and supply chain applications of AI, and (4) bibliometric and conceptual foundations. Keyword and trend mapping highlighted dominant themes such as machine learning and customer behavior, while new areas of interest—including emotion AI, federated learning, and AI ethics—are gaining prominence. This review not only synthesizes dispersed literature but also provides a roadmap for future research, emphasizing explainable AI, adaptive models, ethical governance, and interdisciplinary collaboration to support responsible and innovative AI adoption in marketing.
- Research Article
- 10.1080/09537325.2025.2572043
- Oct 9, 2025
- Technology Analysis & Strategic Management
Amid growing environmental pressures and rapid digitalisation, the circular economy (CE) has become a key strategy, especially in resource-intensive sectors. China’s electronics manufacturing industry, generating over 20% of global e-waste, is central to this transition. Although artificial intelligence (AI) is increasingly promoted as an enabler of CE, its firm-level impacts remain insufficiently examined, especially with regard to nonlinear adoption effects and its interaction with green innovation capability. To address this gap, this study draws upon resource-based and dynamic capability theories to analyse panel data from 34 listed Chinese electronics manufacturers between 2018 and 2022. The results reveal a U-shaped relationship between AI adoption and CE performance, with performance improvements occurring only after surpassing a critical inflection point (AI = 1.8). Contrary to expectations, green innovation capability functions as a negative moderator, postponing this turning point due to structural inertia within established green systems. This moderating effect varies according to firms’ levels of AI maturity. These findings challenge common assumptions about digital-green synergies, highlighting the critical roles of timing, adaptability, and coordination. The study advises firms to adopt phased AI adoption and build dynamic capabilities, while urging policymakers to create tiered incentives and promote industry-academia collaboration.
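The U-shaped effect reported above is conventionally estimated with a quadratic specification, and the inflection point falls out of the fitted coefficients. A minimal sketch, using hypothetical coefficients chosen only so the turning point matches the reported AI = 1.8 (they are not the study's estimates):

```python
# Illustrative sketch: a U-shaped relationship is typically modelled as
#   CE = b0 + b1*AI + b2*AI^2   (b1 < 0, b2 > 0),
# with the turning point at AI* = -b1 / (2 * b2).
# The coefficients below are hypothetical, picked so AI* = 1.8.

b0, b1, b2 = 0.50, -0.72, 0.20   # hypothetical regression estimates

def ce_performance(ai):
    """Predicted CE performance under the quadratic specification."""
    return b0 + b1 * ai + b2 * ai ** 2

turning_point = -b1 / (2 * b2)
print(round(turning_point, 6))  # 1.8

# Performance declines before the turning point and recovers after it:
print(ce_performance(1.0) > ce_performance(1.8))  # True
print(ce_performance(2.6) > ce_performance(1.8))  # True
```

The moderation finding in the abstract would correspond to the turning point shifting rightward as green innovation capability interacts with the AI terms.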
- Research Article
- 10.1001/jamanetworkopen.2025.32312
- Sep 17, 2025
- JAMA Network Open
Patients using languages other than English are a group at risk of poor health outcomes and encounter barriers to accessing translated written materials. Although artificial intelligence (AI) may offer an opportunity to improve access, few studies have evaluated the accuracy and safety of AI translation for clinical care under routine practice conditions. The objective was to investigate the accuracy of AI translation compared with professional human translation of patient-specific issued pediatric inpatient discharge instructions. This comparative effectiveness analysis compared translations by a neural machine translation model vs professional translators using patient-specific pediatric inpatient discharge instructions received by families between May 18, 2023, and May 18, 2024, at a single-center academic pediatric hospital. Instructions were translated to Simplified Chinese, Somali, Spanish, and Vietnamese by professional translators and the Azure AI system and then broken into scoring sections. Two professional translators per language evaluated translations (blinded to source) on an established 5-point scale for fluency, adequacy, meaning, and error severity, with 1 indicating worst performance and 5 indicating best performance. The exposure was AI vs professional translation; the outcome was the quality of discharge instruction translation, including fluency, adequacy, meaning, and severity of errors. A total of 148 sections from 34 discharge instructions were analyzed. When considering all 4 languages together, average fluency, adequacy, and meaning were lower among AI compared with professional human translations.
Among all tested languages, mean (SD) fluency for AI translations was 2.98 (1.12) compared with 3.90 (0.96) for professional translations (difference, 0.92; 95% CI, 0.83-1.01; P < .001), adequacy was 3.81 (1.14) compared with 4.56 (0.70) (difference, 0.74; 95% CI, 0.65-0.83; P < .001), meaning was 3.38 (1.15) compared with 4.28 (0.84) (difference, 0.90; 95% CI, 0.80-0.99; P < .001), and error severity was 3.53 (1.28) compared with 4.48 (0.88) (difference, 0.95; 95% CI, 0.85-1.06; P < .001). Compared with professional translations, the Spanish AI translations were noninferior in adequacy (difference, 0.08; 95% CI, -0.02 to 0.19) and error severity (difference, 0.03; 95% CI, -0.09 to 0.14) but inferior in fluency (difference, 0.38; 95% CI, 0.23-0.53) and just crossed the inferiority threshold in meaning (difference, 0.08; 95% CI, -0.04 to 0.20). The Chinese, Vietnamese, and Somali AI translations were inferior to the professional translations across all metrics, with the greatest differences for Somali. In this comparative effectiveness analysis of AI- vs professionally translated issued discharge instructions, AI-translated instructions performed similarly for Spanish but worse for other languages tested. Validation and clinical implementation of AI-based translation will require special attention to languages of lesser diffusion to prevent creating new inequities.
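The Spanish-language comparisons above illustrate how a noninferiority call is read off a confidence interval: the AI translation is noninferior on a metric when the upper bound of the (professional minus AI) mean difference stays below a pre-specified margin. A sketch of that decision rule, assuming a 0.2 margin (the study's actual margin is not stated in the abstract):

```python
# Noninferiority decision rule: AI is noninferior when the upper CI bound
# of the (professional - AI) mean difference is below the margin.
# The 0.2 margin is an assumption for illustration, not from the study.

MARGIN = 0.2

def noninferior(ci_upper, margin=MARGIN):
    """True when the worst plausible deficit still falls inside the margin."""
    return ci_upper < margin

# CI upper bounds for the Spanish AI translations, taken from the abstract:
print(noninferior(0.19))  # adequacy: True (noninferior)
print(noninferior(0.14))  # error severity: True (noninferior)
print(noninferior(0.20))  # meaning: False (just crosses the threshold)
```

Under this assumed margin, the rule reproduces the abstract's reading, including the Spanish "meaning" score just crossing the inferiority threshold at an upper bound of 0.20.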
- Book Chapter
1
- 10.70593/978-81-981271-1-2_5
- Oct 13, 2024
Smart and sustainable operation is the way forward for industry, and although artificial intelligence (AI) is the pathway to this transformation, its large-scale adoption faces its own set of challenges. First, building AI infrastructure is costly, and the investment is hard to justify for many organizations, for instance small and medium-sized enterprises (SMEs) seeking to stay competitive in the market. The field of AI is multi-layered and demands a technically sound workforce specializing in data science and machine learning, a scarce resource at the global level. In any industry handling sensitive data, adopting AI systems also raises major issues of data privacy and security. Ethics is another important concern: without careful handling, AI can amplify existing human biases into turbocharged outcomes. In addition, operational issues are common; integrating AI into legacy systems and processes can be complex and time-consuming, and the dynamic nature of AI technologies makes solutions hard to scale at run time, adding the operational burden of continuous updates and maintenance. Legal issues also accompany AI as it continues to grow in popularity, since the rules that apply to AI are still forming and differ significantly by region. Addressing these challenges will demand an integrated, systems-level approach encompassing government regulation, academic education, and industry engagement to enact a conducive policy environment, properly educate and train the workforce, and encourage innovation in efficient and sustainable AI-based solutions.
- Research Article
1
- 10.2478/fco-2023-0031
- Dec 1, 2023
- Forum of Clinical Oncology
In recent years, the escalating volume of essential information for oncologists has created a challenge, making it arduous to stay abreast of the latest developments in the multifaceted field of cancer care. Although Artificial Intelligence (AI) is increasingly applied in healthcare, particularly for tasks like image recognition and big data analysis, we advocate for an AI-centric public health model tailored to comprehensive cancer care. This model aims to guide patients from their initial doctor’s visit to the conclusion of treatment, thereby minimizing direct doctor involvement. Results: The proposed AI system comprises distinct units: Regional AI (RAI) for patient management and coordination with healthcare specialists and facilities in specific areas, General AI (GAI) to oversee healthcare processes on a broader scale, and Scientific AI (SAI) for data analysis and hypothesis generation, essential for scientific research and clinical trials. To enhance cost efficiency, we suggest introducing an intermediate layer, Teacher AI (TAI), facilitating the development of AI systems like GAI or RAI based on human needs without necessitating extensive specialist intervention. Conclusions: Implementing this model can simplify oncologists’ daily tasks, reduce errors, improve treatment outcomes, and lower the cost of cancer care while maintaining its high quality. The Human–TAI–AI development model can streamline the system’s development and implementation, making it more cost-effective.
- Research Article
9
- 10.1109/mce.2021.3075329
- May 1, 2022
- IEEE Consumer Electronics Magazine
Although artificial intelligence (AI) promises to deliver ever more user-friendly consumer applications, recent mishaps involving fake information and biased treatment serve as vivid reminders of the pitfalls of AI. AI can harbor latent biases and flaws that can cause harm in diverse and unexpected ways. Before AI becomes interwoven into human society, it is important to understand how and when AI can fail. This article presents a timely survey of AI-induced mishaps that relate to consumer applications. The article also offers suggestions on mitigating strategies to manage the undesirable side effects of using AI for consumer applications. It, therefore, serves a dual purpose of creating awareness of current issues and encouraging other researchers in the consumer technology community to build better AI consumer applications.
- Book Chapter
- 10.4018/979-8-3693-7989-9.ch008
- Oct 11, 2024
This essay investigates AI's impact on democracy and its application to political advertising, campaigning, and participation. The findings show that AI has become a revolutionary force in politics, greatly improving advertising, campaigning, and voter mobilization. Although artificial intelligence (AI) has enormous potential to increase democratic participation, it also presents serious ethical issues, such as concerns about accountability, transparency, and the risk of manipulation. As AI becomes more integrated into politics, stakeholders must work together to develop policies and ethical frameworks that ensure AI enhances democratic processes rather than weakens them and promotes a more inclusive and equitable political environment.
- Research Article
- 10.1108/bepam-01-2025-0024
- Oct 28, 2025
- Built Environment Project and Asset Management
- Front Matter
- 10.1108/bepam-09-2025-290
- Oct 28, 2025
- Built Environment Project and Asset Management
- Research Article
- 10.1108/bepam-03-2025-0077
- Oct 10, 2025
- Built Environment Project and Asset Management
- Research Article
- 10.1108/bepam-12-2024-0289
- Oct 8, 2025
- Built Environment Project and Asset Management
- Research Article
- 10.1108/bepam-09-2024-0221
- Oct 3, 2025
- Built Environment Project and Asset Management
- Research Article
- 10.1108/bepam-01-2025-0051
- Oct 2, 2025
- Built Environment Project and Asset Management
- Research Article
- 10.1108/bepam-07-2024-0175
- Oct 1, 2025
- Built Environment Project and Asset Management
- Supplementary Content
- 10.1108/bepam-08-2024-0205
- Sep 4, 2025
- Built Environment Project and Asset Management
- Research Article
- 10.1108/bepam-08-2024-0190
- Aug 15, 2025
- Built Environment Project and Asset Management
- Research Article
- 10.1108/bepam-12-2024-0288
- Aug 14, 2025
- Built Environment Project and Asset Management