Articles published on Rise Of Artificial Intelligence
584 Search results
- Research Article
- 10.3390/s26051609
- Mar 4, 2026
- Sensors (Basel, Switzerland)
- Alimul Haque Khan + 2 more
Forest fires are a major concern due to their significant impact on the environment, economy, and wildlife habitats. Efficient early detection systems can significantly mitigate their devastating effects. This paper provides a comprehensive review of forest fire detection (FFD) techniques and traces their evolution from basic lookout-based methods to sophisticated remote sensing technologies, including recent Internet of Things (IoT)- and Unmanned Aerial Vehicle (UAV)-based sensor network systems. Historical methods, characterized primarily by human surveillance and basic electronic sensors, laid the foundation for modern techniques. Recently, there has been a noticeable shift toward ground-based sensors, automated camera systems, aerial surveillance using drones and aircraft, and satellite imaging. Moreover, the rise of Artificial Intelligence (AI), Machine Learning (ML), and the IoT introduces a new era of advanced detection capabilities. These detection systems are being actively deployed in wildfire-prone regions, where early alerts have proven critical in minimizing damage and aiding rapid response. All FFD techniques follow a common path of data collection, pre-processing, data compression, transmission, and post-processing. Providing sufficient power to complete these tasks is also an important area of research. Recent research focuses on image compression techniques, data transmission, the application of ML and AI at edge nodes and servers, and the minimization of energy consumption, among other emerging directions. However, building a sustainable FFD model requires careful sensor deployment: sensors can be fixed at specific geographic locations, attached to UAVs, or deployed as a combination of both. Ensuring an adequate energy supply for both ground-based and UAV-based sensors is equally important.
Replacing sensor batteries or recharging UAVs in remote areas is highly challenging, particularly in the absence of an operator. Hence, future FFD systems must prioritize not only detection accuracy but also long-term energy autonomy and strategic sensor placement. Integrating renewable energy sources, optimizing data processing, and ensuring minimal human intervention will be key to developing truly sustainable and scalable solutions. This review aims to guide researchers and developers in designing next-generation FFD systems aligned with practical field demands and environmental resilience.
- Research Article
- 10.33735/phimisci.2026.12094
- Feb 27, 2026
- Philosophy and the Mind Sciences
- Nina Poth + 1 more
The rise of artificial intelligence (AI) raises the question of whether we should introduce a new category of representations, next to mental and scientific representations. We argue that AI ‘representations’, in particular those of deep neural networks, differ significantly from the mental and scientific representations central to the philosophy of (cognitive) science. These systems lack essential features, such as semantic content, the ability to misrepresent, and a clear use condition guiding behavior; it is often unclear what they represent and why. They also lack the capacity to form or identify misrepresentations, which makes it impossible to assess their accuracy. Furthermore, AI systems do not satisfy a use condition in the same way as mental and scientific representations. We conclude that, while AI systems can, under certain conditions, be useful tools for scientific discovery, their internal states should not be mistaken for mental and scientific representations.
- Research Article
- 10.1038/s41533-026-00487-5
- Feb 24, 2026
- NPJ primary care respiratory medicine
- Joan B Soriano + 1 more
Artificial intelligence (AI) is rapidly advancing respiratory disease management, from diagnosis to population lung health. This scoping review synthesizes the most promising uses of AI in respiratory medicine, with a particular focus on pulmonologists and family physicians interested in lung health. In diagnostics, deep-learning systems streamline chest-imaging workflows by triaging radiographs, detecting COVID-19 pneumonia, and classifying lung nodules on CT. In pulmonary function testing, algorithms detect technical errors and classify spirometric patterns, some claiming to outperform pulmonologists. Acoustic analysis of cough, breathing, and speech captured on smartphones or wearables offers non-invasive decision support. For monitoring and prediction, AI helps shorten weaning from mechanical ventilation and guides closed-loop strategies for acute respiratory distress. In chronic care, connected devices integrated with environmental data help to forecast asthma and COPD exacerbations, while telehealth and predictive models enable earlier, more personalized interventions. Additional gains are emerging in paediatrics, sleep medicine, lung ultrasound, and public health. Realizing these benefits will require rigorous multicentre validation and real-world evidence. It will also require proactive bias detection and mitigation with inclusive sampling and equity audits. High-quality, interoperable data and explainable models are needed to enable human oversight. Practical issues such as digital literacy, device access, and usability for children, older adults, and other vulnerable populations also matter for applications requiring patient interaction. With sustained collaboration among clinicians, engineers, AI experts, industry, regulators, and scientific societies, AI can increase the time invested in a satisfactory clinician-patient relationship.
In all likelihood, AI can also measurably improve efficiency and accuracy across multiple domains of respiratory care.
- Research Article
- 10.2196/80268
- Feb 23, 2026
- JMIR cancer
- Arthur Claessens + 8 more
Screening for clinical trials is challenging for clinicians due to its time-consuming and repetitive nature. The rise of artificial intelligence (AI) offers an opportunity to improve screening productivity and reproducibility. Pancreatic cancer is characterized by increasing incidence, poor survival outcomes, and an urgent need for improved management strategies. This study aimed to assess the performance of AI in evaluating clinical trial inclusion and exclusion criteria, compared to a double-blind human gold standard, using a retrospective cohort. In the PANCR-AI (Pancreatic Cancer Retrospective Screening with Artificial Intelligence) pilot study, we retrospectively reviewed cases from our institutional database of patients with advanced pancreatic cancer presented at tumor board meetings between January 2018 and December 2023. Each patient was screened for clinical trials open for inclusion at the time of the multidisciplinary meeting. Manual screening of eligibility criteria for each patient-trial pair was performed by 2 blinded oncologists to determine potential eligibility (gold standard), with a third oncologist resolving discrepancies. Potential eligibility was also assessed using 3 large language models (ie, GPT-4.5, Claude 3.7 Sonnet, and Mistral-7B-Instruct v0.3). Their performance was compared to the human gold standard using standard evaluation metrics (eg, sensitivity, specificity, precision, recall, and F1-score). Correlations between the risk of failure and the number of words and characters in the criteria were analyzed. The time required to complete the screening was recorded for both human and AI assessments. The number of trials open for enrollment at the time of the tumor board meeting was also recorded as a variable for analysis. Across 341 patient-trial pairs, the AI models demonstrated high sensitivity, ranging from 83.3% to 92.2%. 
Analysis of the criteria showed a correlation between the risk of failure and the number of words and characters in the criteria. Overall screening time was significantly longer for the human gold standard (44.70 hours) than for AI (2.53-3.15 hours). Patients were more likely to have been included in a clinical trial if the number of trials open for enrollment was higher at the time of the tumor board meeting (P=.02). Our study highlights the promising performance of AI in clinical trial screening. Future work should explore integration with structured clinical data, such as laboratory values or radiological findings, to improve multimodal comprehension. Expanding the evaluation to a broader range of tumor types and multicenter datasets would improve generalizability. Finally, real-time prospective validation and workflow integration with electronic health records will be critical to assess the feasibility and clinical impact of large language model-assisted screening in daily oncology practice. Addressing these challenges will be essential to move from proof of concept to scalable clinical implementation.
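The evaluation metrics named above (sensitivity, specificity, precision, recall, F1-score) can all be derived from a patient-trial confusion matrix. The sketch below is illustrative only: the counts are invented for the example, not taken from the PANCR-AI data.

```python
from dataclasses import dataclass

@dataclass
class Confusion:
    tp: int  # model and gold standard both judge the pair "potentially eligible"
    fp: int  # model says eligible, gold standard says not
    fn: int  # model misses a pair the gold standard judged eligible
    tn: int  # both judge the pair not eligible

def metrics(c: Confusion) -> dict:
    """Standard screening metrics from confusion counts."""
    sensitivity = c.tp / (c.tp + c.fn)   # recall: fraction of truly eligible pairs found
    specificity = c.tn / (c.tn + c.fp)
    precision = c.tp / (c.tp + c.fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Invented counts for illustration (not the study's 341 patient-trial pairs):
m = metrics(Confusion(tp=47, fp=30, fn=4, tn=260))
```

For pre-screening, high sensitivity is the metric that matters most: a false positive only costs a clinician a manual re-check, whereas a false negative silently denies a patient a trial opportunity.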
- Research Article
- 10.1080/10494820.2026.2615818
- Feb 19, 2026
- Interactive Learning Environments
- Thomas K.F. Chiu
ABSTRACT The rise of artificial intelligence (AI) in education, particularly generative AI, challenges the sufficiency of the established Technological Pedagogical Content Knowledge (TPACK) framework. AI’s agentic autonomy, epistemic complexities, and ethical dimensions necessitate an evolved model. This study investigates the newly proposed Human-Centric AI Pedagogy (HCAP) framework, designed to address these gaps by integrating five knowledge domains: AI-Technological, AI-Content, AI-Pedagogical, Human-AI Collaborative, and Ethical Knowledge. The research goal was to define the specific knowledge and skills required within these HCAP domains from a teacher's perspective. Utilizing a three-round Delphi method, a panel of 30 teachers from diverse subjects developed a consensus list of essential competencies. The findings identified and refined 25 critical knowledge items, providing a foundational and empirically grounded model for the HCAP framework. These findings offer a concrete roadmap for teacher education, translating a theoretical model into actionable competencies. This study equips educators to transition from merely using AI to strategically orchestrating human-AI collaborative learning, ensuring they can harness AI's potential ethically, critically, and productively. The HCAP and its suggested knowledge serve as a vital tool for developing future-ready teacher training programs and professional development.
- Research Article
- 10.1055/a-2794-0336
- Feb 19, 2026
- Seminars in neurology
- W Alexander Dalrymple + 1 more
The widespread adoption of virtual residency interviews in response to the COVID-19 pandemic led to an explosion in literature comparing the pros and cons of virtual and in-person interviews, but also led to an explosion in already-high residency application and interview volumes. While virtual interviews were substantially cheaper for all involved, there is fear that applicants and programs cannot judge one another as well as during in-person interviews. Likewise, increases in application volumes have made holistic application review more challenging for program directors, but the recent rise in "preference signaling" appears to be a promising solution to that issue. The year 2020 also saw increased awareness of systemic inequities in the United States, and medical education and residency recruitment were not immune from scrutiny. Finally, the rise of artificial intelligence could again fundamentally change the resident selection process. It is imperative that the graduate medical education (GME) community continues to adapt to a changing world.
- Research Article
- 10.1111/padm.70043
- Feb 12, 2026
- Public Administration
- Ruoxuan Liu + 1 more
ABSTRACT The rise of artificial intelligence in public decision‐making is reshaping state legitimacy by shifting administrative discretion from human bureaucracies to algorithmic systems. While research has explored AI accountability and legitimacy deficits, how they are related across different decision contexts remains unclear. Drawing on bureaucratic legitimacy, procedural fairness, and forum drifting theories, this study examines how AI accountability and effectiveness shape legitimacy perceptions, depending on decision outcomes. Using three survey experiments with 1135 participants in China, we find that accountability is most crucial when AI decisions introduce losses to citizens, whereas effectiveness plays a greater role when outcomes are positive for them. The interaction effects between AI accountability and effectiveness are likewise contingent on decision outcomes. These findings advance AI governance research by highlighting the conditions under which algorithmic legitimacy is strengthened or weakened, emphasizing the need for tailored accountability and effectiveness strategies based on decision outcomes.
- Research Article
- 10.61113/impact.v2i1.1242
- Feb 5, 2026
- International Journal of Global Mental Health, Innovation, Policy, Action, Culture & Transformation
- Jiya Sarvpriya
In recent years, the landscape of mental health care has been transformed by the rise of artificial intelligence. One of the most talked-about innovations is AI-based psychotherapy chatbots, which gained popularity due to a global shortage of psychotherapists (WHO, 2020), their easy and low-cost availability, and growing human-robot interactions (HRI) in the digital world. Tools such as Woebot, Wysa, and Replika offer users 24/7 support rooted in Cognitive Behavioural Therapy (CBT), mindfulness, and other psychoeducation tools. While research has explored their effectiveness, certain cultural implications remain underexamined. For instance, when an Indian user seeks advice on facing family pressure, the standardized AI response often prioritizes individualism and suggests setting boundaries. This is incongruent with the collectivistic vision of Indian society. The paper discusses how most of the algorithmic training of AI chatbots is done in a Western context. It introduces the “Monorhythmic Algorithm”: a system which uses a standard logic to respond even for culturally diverse users. In contrast, norms, distress, and help-seeking behaviours are culturally polyrhythmic, and even the most accurate AI system may misinterpret the underlying meanings and offer advice that is irrelevant or even harmful in non-Western contexts. While discussing the research gap, the paper further argues that although there have been advancements such as the inclusion of customized local languages in platforms like Wysa, there is a need for the existing AI algorithms to incorporate a layer of cultural transparency and culture-sensitive responses that prioritize the users’ lived realities. It concludes by proposing strategies such as cultural context embedding, cultural transparency prompts, local humans-in-the-loop, collaboration with AI developers, and cultural audits that can be included to mitigate cultural barriers and promote culturally attuned mental health care.
- Research Article
- 10.3390/machines14020178
- Feb 4, 2026
- Machines
- Ietezaz Ul Hassan + 3 more
Early gearbox defect detection is imperative for reducing unplanned downtime, ensuring reliability and efficiency, and minimizing maintenance expenses. In recent years, with the rise of Artificial Intelligence (AI) and digital transformation, gearbox defect detection using AI has gained popularity. Machine learning (ML) classifiers are widely used and have transformed gearbox condition monitoring from manual inspection into automatic monitoring systems. This work proposes a moving window-based method for extracting statistical features from recorded gearbox vibration signals. The extracted features were used to train traditional ML classifiers. Moving window sizes of 300, 400, 500, 600, 700, and 800 were applied to extract statistical features from the publicly available benchmark dataset, producing six datasets, one for each window size. The generated datasets were partitioned using the K-fold cross-validation method to train and test ML models. This study explored and evaluated seven prominent ML classifiers: Decision Tree, Random Forest, Support Vector Machine (SVM), Naïve Bayes, K-Nearest Neighbor (KNN), Gradient Boosting Classifier (GBC), and Logistic Regression. The experimental results demonstrated that SVM, Logistic Regression, and GBC can outperform the other ML classifiers. The results in terms of accuracy, precision, and recall also revealed that classifier performance improves as the size of the moving window used for feature extraction increases.
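The moving-window extraction step described above can be sketched in a few lines of pure Python. The particular feature set (mean, standard deviation, RMS, peak) and the non-overlapping window step are assumptions for illustration; the paper does not list its exact features here.

```python
import math
import statistics

def window_features(signal, window_size, step=None):
    """Slide a window over a vibration signal and emit one row of
    statistical features per window (feature choice is illustrative)."""
    step = step or window_size  # non-overlapping windows by default
    rows = []
    for start in range(0, len(signal) - window_size + 1, step):
        w = signal[start:start + window_size]
        rows.append({
            "mean": statistics.fmean(w),
            "std": statistics.pstdev(w),
            "rms": math.sqrt(statistics.fmean(x * x for x in w)),
            "peak": max(abs(x) for x in w),
        })
    return rows

# One dataset per window size, mirroring the paper's setup
# (synthetic signal stands in for the benchmark vibration data):
signal = [math.sin(0.1 * i) for i in range(2400)]
datasets = {n: window_features(signal, n) for n in (300, 400, 500, 600, 700, 800)}
```

Each feature row, labeled with the gearbox condition of its source recording, would then feed a standard classifier (SVM, Logistic Regression, GBC, etc.) under K-fold cross-validation.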
- Research Article
- 10.1016/j.jpsychires.2025.11.039
- Feb 1, 2026
- Journal of psychiatric research
- Francesco Attanasio + 10 more
Psychoeducation is a key intervention in mood disorders. With the rise of artificial intelligence (AI) conversational agents, tools like ChatGPT are increasingly consulted by patients. Yet, empirical data on how AI-generated psychoeducational content is perceived by patients and professionals remain limited. In this cross-sectional study, 30 depressed inpatients submitted five open-ended questions to ChatGPT-4o. Responses were rated by patients using 5-point Likert scales for relevance, comprehensibility, usefulness, empathy, and acceptance. Independent safety checks were applied to all outputs. The same responses were later blindly evaluated, in randomized order, by three psychiatrists and three psychiatric rehabilitation technicians (PRTs). All outputs passed safety review. Patients assigned higher total scores (mean±SD=22.43±2.64) than PRTs (17.63±3.39) and psychiatrists (15.42±2.02). The largest gaps involved empathy and acceptance, whereas relevance, usefulness, and comprehensibility differed less. PRT ratings were intermediate: closer to patients on relevance, comprehensibility, and usefulness, but closer to psychiatrists on empathy and acceptance. Within patients, no associations emerged with age, education, depression severity, or prior psychoeducation. Patients with mood disorders perceived ChatGPT-generated responses as more relevant, comprehensible, useful, accepting, and empathetic than health professionals did. With conversational agents entering psychoeducation, clinicians must develop strategies to critically integrate such tools, ensuring safety and quality while guiding patient use. The challenge is not resisting AI adoption, but framing it within safe, effective, and ethically sound psychoeducational care.
- Research Article
- 10.1215/10539867-12110872
- Feb 1, 2026
- Federal Sentencing Reporter
- Terry Skolnik
Abstract Postconviction review is an increasingly salient issue, especially during transitions between presidential administrations. Despite statements that he would not do so, President Joe Biden pardoned his son, Hunter Biden, for firearms and tax-related offenses. Weeks later, President Donald Trump pardoned individuals who were convicted of crimes associated with the January 6, 2021, events on Capitol Hill. Each administration’s postconviction relief decisions were critiqued on similar grounds, namely, that they were unprincipled, self-interested, or partisan. These examples highlight the importance of fairness, justice, and coherence associated with postconviction review and postconviction relief. Interestingly, the recent transition between the Biden and Trump administrations occurred during a unique period that was characterized by two other major developments: the rise of artificial intelligence and a renewed emphasis on government efficiency. These two developments may catalyze significant reforms to the postconviction review process to counteract a specific type of government waste and abuse: excessive prison sentences. Drawing on the insights of public administration scholarship, this article argues that artificial intelligence may help improve postconviction review and postconviction relief in certain respects. It argues that artificial intelligence can be used to identify eligible persons for second-look resentencing and clemency, facilitate applications for postconviction review, and streamline the evaluation of postconviction review claims. It demonstrates how artificially intelligent clemency may improve efficiency, fairness, and access to justice. It also highlights important barriers that limit AI’s potential and effectiveness in postconviction review contexts.
- Research Article
- 10.30574/wjarr.2026.29.1.0110
- Jan 31, 2026
- World Journal of Advanced Research and Reviews
- Kabir Oyewale + 1 more
Institutional investors have become the primary owners of public equities, fundamentally transforming corporate governance and market dynamics. This paper explores how the rise of artificial intelligence (AI) in investment management introduces new systemic risks and challenges traditional fiduciary duties. We define “algorithmic stewardship” as the governance of AI-driven decision-making within fiduciary institutions. Our framework connects investor constraints, AI decision rules, and market outcomes, highlighting that while AI can enhance efficiency and risk management, it may also synchronize behavior, amplify procyclical feedback loops, and obscure accountability. The paper discusses implications for regulators, suggesting the need for interaction-based oversight and AI-aware stress tests, as well as responsibilities for institutional investors. We conclude with future research directions on accounting disclosure and assurance in an AI-driven financial ecosystem.
- Research Article
- 10.12681/gbruno.44291
- Jan 29, 2026
- Giordano Bruno
- Angeliki Antoniou
Was Da Vinci a great artist, or a great scientist and engineer? Why did he combine two domains, art and science, that today feel separate and completely unrelated? Was it due to a lack of scientific advancement that allowed him the space to explore science alongside the arts, or was it an internal drive that made him believe the two were inseparable? This article envisions the future of science by considering elements such as the prevalence of technology, the rise of artificial intelligence, and the evolving role of humans in the years to come. It is argued that the future of science depends on its integration with the arts, a synergy that fosters innovation and creativity while enabling individuals to achieve a more holistic intellectual development.
- Research Article
- 10.1186/s40580-026-00533-5
- Jan 28, 2026
- Nano convergence
- Joon Hwang + 4 more
The need for processing complex and temporal datasets has increased with the rise of artificial intelligence. In this context, reservoir computing, which utilizes the short-term memory of the reservoir to map input data into a high-dimensional space, has garnered significant interest. In this study, for the first time, fully CMOS-compatible reservoir computing is demonstrated through gate insulator stack engineering. Integrated on a single wafer, CMOS circuits, Al2O3/Si3N4 (A/N) devices for both reservoir and leaky integrate-and-fire neuron applications, and Al2O3/Si3N4/SiO2 (A/N/O) devices as synaptic devices are verified. Furthermore, the influence of various bias conditions on reservoir performance is analyzed. The proposed co-integrated reservoir computing system efficiently handles temporal data, reducing network resources by ~53% with only a ~0.17 percentage-point accuracy drop while remaining robust to device variations.
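The reservoir principle the abstract relies on, a fixed random recurrent network whose short-term memory projects an input sequence into a high-dimensional state space, can be illustrated in software. The sketch below is a minimal pure-Python analogue with leaky-integrator units; the sizes, leak rate, and weight scaling are illustrative assumptions, not the paper's CMOS implementation.

```python
import math
import random

def run_reservoir(inputs, n_res=50, leak=0.3, seed=0):
    """Map a 1-D input sequence into an n_res-dimensional state trajectory
    using fixed random weights; only a linear readout on the returned
    states would be trained, which is what makes reservoirs cheap."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-0.5, 0.5) for _ in range(n_res)]
    # Scale recurrent weights so the dynamics are contractive (fading memory).
    w = [[rng.uniform(-1.0, 1.0) / math.sqrt(n_res) for _ in range(n_res)]
         for _ in range(n_res)]
    x = [0.0] * n_res
    states = []
    for u in inputs:
        pre = [w_in[i] * u + sum(w[i][j] * x[j] for j in range(n_res))
               for i in range(n_res)]
        # Leaky integration: each state mixes its past with the new drive.
        x = [(1 - leak) * x[i] + leak * math.tanh(pre[i]) for i in range(n_res)]
        states.append(list(x))
    return states

# A single input pulse followed by silence: the state keeps a decaying
# trace of the pulse, which is the short-term memory a readout exploits.
trajectory = run_reservoir([1.0, 0.0, 0.0, 0.0])
```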
- Research Article
- 10.4081/dr.2026.10568
- Jan 21, 2026
- Dermatology reports
- Gianluca Pistore + 8 more
Dear Editor, In recent years, many individuals have turned to the Internet for health-related information, a phenomenon commonly referred to as "Dr. Google". While accessible, this practice often exposes users to unverified content, potentially leading to confusion and anxiety. With the rise of artificial intelligence (AI), the landscape is shifting: tools like ChatGPT offer structured, conversational responses. But how reliable are these answers, especially in the medical field? [...].
- Research Article
- 10.65461/tanmyia.2026.2.1
- Jan 20, 2026
- TANMYIA JOURNAL FOR SCIENCES AND KNOWLEDGE
- Abdulwahab Amer
This study examines Malay Islamic knowledge traditions in Indonesia, Malaysia, and Brunei as active political entities and deeply rooted civilizational reservoirs shaped through a long historical interaction between Islam and the societies of the Malay world. This interaction began with the early arrival of Islam through Yemeni merchants and preachers who carried the scholarly legacy of Shāfiʿī jurisprudence and Sunni Sufism. The research explores the concept of Malay Islamic knowledge and investigates how it responds to the profound transformations brought about by artificial intelligence, as an intellectual product reflecting the unique experience of Islamic communities in the Malay world. The study aims to define Malay Islamic knowledge and analyze the challenges and opportunities imposed by the rise of artificial intelligence on its structure and functions. It proposes a Maqāṣid-based framework for renewing this knowledge and ensuring its continuity by integrating research induction with linguistic and historical analysis, grounded in scriptural texts and the principles of Islamic law. Artificial intelligence is viewed not as a threat, but as a tool that can be harnessed to serve Islamic knowledge in a manner compatible with contemporary needs while preserving Islamic foundations and values. The research presents a Maqāṣid-oriented approach capable of accommodating digital transformations, relying on higher objectives of Sharia and legal maxims to guide renewal and regulate engagement with modern technologies. It concludes that Malay Islamic knowledge—with its civilizational uniqueness, historical depth, and well-established Sharīʿah objectives—is well-positioned to provide a knowledge model capable of contributing to the renewal and development of Islamic thought, and guiding the interaction with technological changes. 
This is achieved through a civilizational renewal project that safeguards foundational constants, directs evolving variables, and employs artificial intelligence in a conscious, Maqāṣid-based manner, enabling this knowledge to play an effective role in shaping Islamic civilization, preserving identity, and confronting cultural alienation.
- Research Article
- 10.3390/jmse14020203
- Jan 19, 2026
- Journal of Marine Science and Engineering
- Wenhui Xiong + 3 more
In marine hydrodynamics, the core of the boundary element method (BEM) lies in the numerical calculation of the free-surface Green’s function. With the rise of artificial intelligence, using neural networks to fit the Green’s function has become a new trend, yet most existing studies are confined to fitting the Green’s function in infinite water depth. In this paper, a neural network fitting method for a finite-depth Green’s function is proposed. The classical Multilayer Perceptron (MLP) network and the emerging Kolmogorov–Arnold Network (KAN) are employed to conduct global and partition-based fitting experiments. Experiments indicate that the partition-based KAN fitting model achieves higher fitting accuracy, with most regions reaching four-decimal (4D) fitting precision. For large-scale data input, the average time for the model to calculate a single Green’s function value is 0.0868 microseconds, which is significantly faster than the 0.1120 s required by the traditional numerical integration method. These results demonstrate that the KAN can serve as an accurate and efficient model for finite-depth Green’s functions. The proposed KAN-based fitting method not only reduces the computational cost of numerical evaluation of Green’s functions but also maintains high prediction precision, providing an alternative approach to accelerate BEM calculations for floating body hydrodynamic analysis.
- Research Article
- 10.71052/grb2025/idbr5800
- Jan 15, 2026
- Global Education Bulletin
- Yiru Zang
In an era where digital technologies increasingly mediate children’s daily experiences, the rise of artificial intelligence (AI)-generated ecological simulations raises critical questions about how young learners perceive, embody, and understand the natural world. While virtual environments offer accessible alternatives to outdoor learning, they may also restructure children’s sensory engagement and ecological awareness in ways that remain insufficiently examined. Grounded in Merleau-Ponty’s phenomenology of embodiment and Gibson’s theory of affordances, this study investigates how primary-school children experience ecological learning differently in real natural environments compared to AI-simulated ecological spaces. Using a qualitative phenomenological design, the research involved participatory observation, children’s motion-path tracking, semi-structured interviews, and video-elicitation sessions, conducted across two contrasting learning contexts: a tropical rainforest field site and an AI-generated ecological installation. The findings reveal that natural environments elicit expansive bodily engagement, multi-sensory activation, spontaneous exploration, and heightened affective attunement to living elements. These features are strongly associated with ecological consciousness and embodied learning, underscoring the unique role of real natural settings in fostering meaningful ecological understanding. In contrast, AI-generated environments, while visually immersive, tend to produce more task-driven, visually dominant, and sensorily limited behaviors, with reduced affordance variability and diminished perception of environmental vitality. The study argues that artificial intelligence (AI) simulations cannot replace nature’s dynamic affordances and phenomenological depth but can serve as complementary tools when carefully designed to enhance uncertainty, interactivity, and bodily agency. 
These insights contribute to growing debates on AI in education by highlighting the irreplaceable role of embodied encounters with real ecosystems and offering a framework for integrating technology with environmental pedagogy.
- Research Article
- 10.2174/0113816128354055250312014359
- Jan 1, 2026
- Current pharmaceutical design
- Marzieh Neykhonji + 4 more
Endometriosis, a prevalent women's health condition, is associated with persistent pelvic pain and infertility. Despite ongoing research, its precise disease mechanism remains elusive, impeding the discovery of a definitive cure. However, the progression of this disease is driven by three central factors, namely estrogen, progesterone, and inflammatory processes. The current work summarizes an evaluation of hormonal drug therapy in endometriosis, highlighting pathogenesis, clinical studies, and the anticipated role of AI in improving diagnostic accuracy and therapeutic results. Current information related to endometriosis and the application of AI in its diagnosis and treatment were evaluated through an in-depth literature search in the PubMed database and Google Scholar search engine. The current treatment modalities for this disease encompass drug therapy and surgery. In line with key contributing factors, the first-line pharmaceutical treatment revolves around progestin therapy, which involves administration either alone or in combination with a small amount of estrogen. Each medication is linked to certain drawbacks, encompassing bone loss associated with progesterone-only therapy, considerable cost implications, and heightened risks of bleeding, spotting, and drug intolerance when utilizing combined progesterone-estrogen therapy. Many clinical studies on endometriosis are currently investigating the overall impact of the therapeutic approach involving progesterone-estrogen therapy with respect to the treatment of pelvic pain, health-related quality of life, cost-effectiveness, and tolerability. The rise of artificial intelligence and its advanced data processing capabilities present a promising opportunity to revolutionize endometriosis diagnosis and treatment by offering novel approaches.