Articles published on Artificial Intelligence and Machine Learning
9323 Search results
- New
- Research Article
- 10.3724/sp.j.1123.2025.08003
- Feb 8, 2026
- Se pu = Chinese journal of chromatography
- Yufan Zhang + 6 more
Cardiovascular diseases (CVDs) are among the leading causes of global morbidity and mortality. Due to their high prevalence and often asymptomatic progression, there is a pressing need for diagnostic tools that enable their early, accurate, and accessible detection. Acute coronary syndrome (ACS), a common and severe CVD with high morbidity and mortality rates, has attracted considerable scientific interest, and various methods have been developed to detect it rapidly and accurately. Traditional diagnostic methods relying on antibody-based assays are effective; however, they face significant limitations, including high production costs, poor stability under varying environmental conditions, batch-to-batch variability, and cross-reactivity leading to false positives. These challenges have motivated the search for robust, cost-effective alternatives capable of detecting biomarkers with high sensitivity and specificity. Molecularly imprinted polymers (MIPs) have emerged as a promising alternative, offering antibody-like molecular recognition capabilities, superior stability, lower production costs, and resistance to harsh environmental conditions. This review systematically examines advances in MIP-based sensors for ACS biomarker detection over the last fifteen years, including imprinting strategies for key ACS biomarkers, sensor development and integration, and current challenges along with future perspectives. The first section focuses on molecular imprinting techniques for essential ACS biomarkers, such as cardiac troponin (cTnI/cTnT), myoglobin (Myo), and creatine kinase (CK). It compares whole-protein imprinting with epitope imprinting, highlighting the advantages of the latter in reducing template costs and enhancing binding specificity.
Epitope imprinting using short peptide sequences has demonstrated femtomolar detection limits while overcoming challenges associated with large protein templates, such as structural denaturation and difficult template removal. The review also explores innovative approaches like dummy template imprinting, where structurally similar but cheaper molecules are used to create MIPs for high-cost biomarkers, achieving comparable specificity and sensitivity. The second section discusses the integration of MIPs with advanced biosensing platforms. Electrochemical sensors, using MIP-modified electrodes, have achieved remarkable sensitivity and rapid response times, making them suitable for point-of-care testing (POCT). Optical sensors, particularly those based on surface-enhanced Raman spectroscopy and surface plasmon resonance, enable label-free, real-time detection with ultra-low detection limits. The review also addresses the integration of MIPs with microfluidic technology, where miniaturized devices facilitate automated, high-throughput biomarker analysis. Examples include paper-based microfluidic sensors that combine capillary action with MIP-SERS tags for multiplexed detection, achieving low detection limits without complex instrumentation. Despite these advancements, the review identifies key challenges hindering widespread clinical adoption of MIP-based ACS sensors. Although the sensitivity and specificity of MIPs are impressive, they still lag behind those of monoclonal antibodies in some applications, particularly for low-abundance biomarkers. Reproducibility issues arise from variations in polymerization conditions and template removal efficiency. Commercialization barriers include the lack of standardized production protocols and regulatory frameworks for MIP-based diagnostics. The review proposes several strategic directions to address these limitations.
Computational modeling and machine learning could optimize monomer selection and polymerization conditions to enhance MIP performance. The development of hybrid systems combining MIPs with nanomaterials may further improve sensitivity and signal transduction. Multidisciplinary collaborations among chemists, engineers, and clinicians will be essential to translate laboratory innovations into commercially viable diagnostic tools. Additionally, the integration of MIPs with artificial intelligence and machine learning algorithms could support the development of personalized diagnosis and treatment strategies. These future perspectives are likely to have a significant impact on the early diagnosis and treatment of cardiovascular diseases. In conclusion, MIP-based sensors represent a promising direction in ACS diagnostics, offering a unique combination of affordability, stability, and precision. By addressing current technical and translational challenges, MIP technology has the potential to revolutionize early disease detection, particularly in resource-limited areas. This review not only summarizes fifteen years of research progress but also provides a roadmap for future developments that could make personalized, decentralized cardiovascular diagnostics a widespread reality.
- New
- Research Article
- 10.1007/s00228-026-03997-w
- Feb 6, 2026
- European journal of clinical pharmacology
- Mohammadsadra Shamohammadi + 7 more
Warfarin remains one of the most widely used anticoagulants; however, its narrow therapeutic index means that even small dosing deviations can result in thromboembolic or bleeding events, necessitating close monitoring and strict control of the international normalized ratio (INR). Although traditional warfarin dosing algorithms incorporating CYP2C9 and VKORC1 genotypes improve upon fixed-dose regimens, they explain less than 50% of dose variability and perform inconsistently across populations. These limitations underscore the need for more adaptive and precise dosing methodologies. Artificial intelligence (AI) and machine learning (ML) have been recognized as powerful approaches to advance warfarin dose individualization. This narrative review synthesizes literature on machine learning approaches to warfarin dosing, including support vector regression, neural networks, ensemble models, and reinforcement learning, with a focus on predictive performance and clinical relevance. Overall, the literature indicates that ML-based warfarin dosing models may improve prediction of the therapeutic warfarin dose and regulation of INR levels compared with traditional clinical and pharmacogenetic interventions. However, many published models are constrained by small sample sizes and limited external validation, reducing generalizability. Methodological heterogeneity and inconsistent reporting further underscore persistent gaps in the evidence base. AI and ML approaches have shown potential advantages over clinical and pharmacogenetic dosing methods for warfarin, with some studies reporting lower prediction errors and improved therapeutic INR control. However, further studies are needed to draw definitive conclusions about their comparative effectiveness.
- New
- Research Article
- 10.11113/humentech.v5n1.118
- Feb 6, 2026
- Journal of Human Centered Technology
- Muhammad Alif Imran Mohammad Fadzir + 1 more
In today’s dynamic business environment, Information and Communication Technology (ICT) serves as a pivotal force driving innovation, particularly in human resource management. One notable advancement is the emergence of smart resume applications, which are reshaping traditional recruitment practices. These systems utilize advanced technologies such as artificial intelligence (AI), machine learning, and natural language processing (NLP) to automate resume screening and candidate evaluation. This paper presents the development of an automated resume parsing system designed to reduce the operational load on human resource professionals. By utilizing NLP techniques, the system extracts key candidate information, including name, contact details, educational background, and skills, from resume documents. A custom Named Entity Recognition (NER) model is employed to enhance the accuracy and relevance of the extracted data. The model was trained using the spaCy NLP framework, achieving an overall accuracy of 92.4% and an F1-score of 0.90. The extracted data are presented through an interactive web interface for HR personnel, enabling structured and efficient review of applicant information. The results demonstrate that integrating NLP and machine learning in recruitment systems can significantly enhance automation, consistency, and fairness in candidate evaluation processes. The system's effectiveness was demonstrated through successful extraction and structured presentation of applicant information, aligned with organizational hiring criteria. Beyond technical functionality, this study highlights the potential of such human-centered technologies to enhance decision-making, increase recruitment efficiency, and transform the recruiter-applicant interaction by fostering transparency and fairness in the hiring process.
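The extraction step described above can be sketched minimally with rule-based heuristics. This is an illustrative assumption, not the paper's system (which uses a trained custom spaCy NER model); the field names and regular expressions here only show the kind of structured record such a parser emits:

```python
import re

def parse_resume(text):
    """Minimal rule-based sketch of resume field extraction.

    The reviewed system uses a custom spaCy NER model; this sketch only
    illustrates the structured output a resume parser produces.
    """
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s-]{7,}\d", text)
    # Naive heuristic: treat the first non-empty line as the candidate's name.
    name = next((ln.strip() for ln in text.splitlines() if ln.strip()), None)
    return {
        "name": name,
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

sample = """Jane Doe
jane.doe@example.com
+1 555-123-4567
Skills: Python, NLP"""

record = parse_resume(sample)
print(record)
```

A trained NER model replaces these brittle regexes with learned entity spans (PERSON, SKILL, ORG), which is what lifts accuracy to the reported 92.4%.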
- New
- Research Article
- 10.3390/philosophies11010018
- Feb 6, 2026
- Philosophies
- Boumediene Hamzi
This essay explores the metaphysical and philosophical implications of Artificial Intelligence (AI) and Machine Learning (ML) through the intersecting insights of René Guénon (ʿAbd al-Wāḥid Yaḥiā), Martin Heidegger, and Ibn al-ʿArabī. It argues that modern AI systems, particularly in their statistical and data-centric forms, are not merely instrumental tools but expressions of a deeper metaphysical worldview, one rooted in quantification, abstraction, and utility. Guénon’s critique of the “reign of quantity” and Heidegger’s notion of Enframing (Gestell) converge in diagnosing the loss of qualitative and sacred dimensions in modern life. While Heidegger’s phenomenology provides a powerful immanent critique of technological reductionism from within the Western philosophical tradition, Guénon’s metaphysical traditionalism articulates a diagnosis of modernity that resonates with Islamic metaphysics, especially as articulated by Ibn al-ʿArabī. The essay includes Heidegger as a representative of a critique of modern technology issuing from the Western tradition itself, emphasizing his shared concerns with Guénon, whose metaphysics resonates with that of Ibn al-ʿArabī. Through a comparative metaphysical framework, this paper proposes an Islamic response to AI that avoids both technophilia and technophobia, insisting instead on a spiritually grounded ethic of technology that preserves human dignity and mission. Methodologically, the essay restores a prior order often inverted in contemporary AI ethics: ontology (what AI is) grounds epistemology (what it can know), and only then can ethical evaluation be coherent.
- New
- Research Article
- 10.3897/ejfa.2026.172240
- Feb 6, 2026
- Emirates Journal of Food and Agriculture
- Duanne Engelbrecht + 3 more
The global poultry industry is a critical sector, tasked with meeting the increasing demand for animal protein. Despite its growth and efficiency, it faces challenges, including enhancing productivity, optimizing resource utilization, and ensuring animal welfare. Addressing these challenges requires innovative solutions to improve both the efficiency and sustainability of poultry production. This paper presents an in-depth analysis of how advancements in artificial intelligence (AI) and machine learning (ML) technologies are being integrated into poultry farming to revolutionize its practices. We explore the application of AI in monitoring systems, smart poultry houses, and automated management practices that significantly enhance production metrics and animal welfare. Our study delves into various AI-driven methods, such as predictive modelling, real-time environmental monitoring, and precision feeding systems. Furthermore, the research identifies the current limitations and future potential of these technologies in facilitating a shift towards more responsive and responsible poultry farming practices. Our findings suggest that embracing AI technologies not only contributes to the economic viability of poultry farms but also aligns with ethical standards and sustainability goals, indicating a promising direction for the future of poultry farming.
- New
- Research Article
- 10.1038/s41746-026-02353-7
- Feb 5, 2026
- NPJ digital medicine
- Hanyang Li + 6 more
Advances in artificial intelligence (AI) and machine learning (ML) have led to a surge in AI/ML-enabled medical devices, posing new challenges for regulators because best practices for developing, testing, and monitoring these devices are still emerging. Consequently, there is a critical need for up-to-date data analyses of the regulatory landscape to inform policy-making. However, such analyses have historically relied upon manual annotation efforts because regulatory documents are unstructured, complex, multi-modal, and filled with jargon. Efforts to automate annotation using simple natural language processing methods have achieved limited success, as they lack the reasoning needed to interpret regulatory materials. Recent progress in large language models (LLMs) presents an unprecedented opportunity to unlock information embedded in regulatory documents. This work conducts the first wide-ranging validation study of LLMs for scaling data analyses in the field of medical device regulatory science. Evaluating LLM outputs using expert manual annotations and "LLM-as-a-judge," we find that LLMs can accurately extract attributes spanning pre- and post-market settings, with accuracy rates often reaching 80% or higher. We then demonstrate how LLMs can scale up analyses in three applications: (1) monitoring device validation practices, (2) coding medical device reports, and (3) identifying potential risk factors for post-market adverse events.
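The evaluation described above, scoring LLM-extracted attributes against expert manual annotations, amounts to per-attribute agreement. A minimal sketch, assuming a hypothetical record structure and attribute name (the study's actual schema is not given here):

```python
def attribute_accuracy(llm_records, expert_records, attribute):
    """Fraction of records where the LLM-extracted value of one attribute
    matches the expert annotation. The 80%+ figures cited above are
    accuracies of this kind; the record structure here is an assumption."""
    assert len(llm_records) == len(expert_records)
    matches = sum(
        llm.get(attribute) == exp.get(attribute)
        for llm, exp in zip(llm_records, expert_records)
    )
    return matches / len(expert_records)

# Hypothetical device summaries: LLM output vs. expert ground truth.
llm = [
    {"clinical_validation": "yes"},
    {"clinical_validation": "no"},
    {"clinical_validation": "yes"},
    {"clinical_validation": "yes"},
]
expert = [
    {"clinical_validation": "yes"},
    {"clinical_validation": "no"},
    {"clinical_validation": "no"},
    {"clinical_validation": "yes"},
]
print(attribute_accuracy(llm, expert, "clinical_validation"))  # 0.75
```

The "LLM-as-a-judge" variant replaces the exact-match comparison with a second model grading each pair, which matters when extracted values are free text rather than categorical.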
- New
- Research Article
- 10.3390/mti10020018
- Feb 5, 2026
- Multimodal Technologies and Interaction
- Pablo Fernández-Arias + 2 more
The rapid expansion of digital education in the 21st century has positioned Virtual Reality Learning Environments (VRLEs) as promising spaces for fostering greater learner autonomy. As immersive technologies become more accessible and pedagogically versatile, they offer students opportunities to regulate their learning processes, experiment in interactive scenarios, and progress at their own pace. This review examines how autonomous learning has been conceptualized and investigated within VRLE research through a comprehensive bibliometric analysis of studies published between 2000 and 2025. The results reveal a research field shaped by two major orientations: one focused on human and pedagogical dimensions (learner diversity, instructional design, and evidence-based strategies) and another on technological innovation (artificial intelligence, machine learning, and simulation-based systems). Topic analyses show that digital and immersive education dominate current scholarly production, while areas directly related to autonomy, personalized learning, and student-centered methodologies remain comparatively less developed. Accordingly, it is crucial to reinforce pedagogical structures that enable autonomous learning in VR environments and to integrate technological advancements in a manner that translates into tangible improvements in educational quality across different settings.
- New
- Research Article
- 10.18231/j.occ.70716.1770187942
- Feb 4, 2026
- Onco Critical Care
- Shivam Dubey
Current applications of artificial intelligence and machine learning in oncologic ICU management
- New
- Research Article
- 10.1016/j.nbd.2026.107307
- Feb 3, 2026
- Neurobiology of disease
- Shaik Basha + 4 more
Artificial intelligence and machine learning in neurodegenerative disease management: A 21st century paradigm.
- New
- Research Article
- 10.3892/ijo.2025.5828
- Feb 1, 2026
- International journal of oncology
- Yuanshe Huang + 2 more
Prostate cancer remains one of the most prevalent malignancies and a major cause of cancer‑related mortality among men worldwide. Despite widespread use of prostate‑specific antigen testing, current diagnostic approaches suffer from low specificity and limited ability to distinguish between indolent and aggressive disease, resulting in overdiagnosis and overtreatment. Advances in molecular biology, genomics and metabolomics have led to the identification of novel biomarkers that have potential for improving the precision of prostate cancer diagnosis, prognosis and therapy. The present review provides a comprehensive overview of emerging prostate cancer biomarkers, including genetic (such as BRCA1/2, HOXB13 and PTEN), RNA‑based (such as PCA3 and miRNAs), metabolic (such as citric acid and polyamines) and methylation markers (such as GSTP1, APC and RASSF1A). These biomarkers not only enhance diagnostic accuracy but also facilitate risk stratification, prediction of therapeutic response and real‑time disease monitoring through liquid biopsy technologies. Moreover, integrating multi‑omics data with artificial intelligence and machine learning may further improve early detection and personalized treatment strategies. Overall, the development and clinical implementation of these biomarkers represent a transformative step toward precision medicine in prostate cancer, enabling earlier diagnosis, optimized therapy selection and improved patient outcomes.
- New
- Research Article
- 10.1016/j.jad.2025.120569
- Feb 1, 2026
- Journal of affective disorders
- Ambre Marie + 6 more
Acoustic and machine learning methods for speech-based suicide risk assessment: A systematic review.
- New
- Research Article
- 10.1016/j.ijmedinf.2025.106158
- Feb 1, 2026
- International journal of medical informatics
- Valentina Ivanovic + 3 more
Artificial intelligence methods in gestational diabetes mellitus prediction: A systematic literature review.
- New
- Research Article
- 10.37082/ijirmps.v14.i1.232923
- Feb 1, 2026
- International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences
- Ganesh Chandrasekaran
Finite Element Analysis (FEA) is a foundational tool in mechanical engineering, enabling the prediction of structural behavior under complex loading, thermal, and environmental conditions. As mechanical systems grow in complexity and design cycles shorten, traditional FEA workflows face increasing pressure to deliver faster, more accurate, and more adaptive simulations. Artificial Intelligence (AI) and machine learning (ML) are emerging as transformative technologies that enhance FEA capability, accelerate learning curves, automate model development, and improve decision making in structural engineering. This paper explores the integration of AI into FEA learning and development, highlighting its impact on model generation, mesh optimization, material behavior prediction, simulation acceleration, and engineering education. It also discusses future directions where AI driven FEA will enable autonomous design, real time digital twins, and intelligent structural health monitoring.
- New
- Research Article
- 10.1002/ddr.70220
- Feb 1, 2026
- Drug development research
- Hemlata Naykwadi + 1 more
The epidermal growth factor receptor (EGFR) is a key target in cancer therapy, mainly in non-small cell lung cancer (NSCLC). However, the efficacy of EGFR-targeted therapies is limited by the development of resistance. This comprehensive review details the structural biology of EGFR and its role in oncogenic signaling, elucidating the major activating mutations, particularly exon 19 deletions and L858R point mutations, and acquired resistance. The progressive development of EGFR tyrosine kinase inhibitors (TKIs), from first-generation ATP-competitive inhibitors (e.g., gefitinib, erlotinib) to third-generation covalent agents (e.g., osimertinib) and emerging fourth-generation allosteric and degradation approaches, is critically examined for its mechanisms, efficacy, and clinical limitations. We also discuss intrinsic and acquired resistance mechanisms, including alternative oncogenic drivers (KRAS, ALK), bypass pathway activation (MET, HER2), and phenotypic changes like epithelial-mesenchymal transition. Additionally, we emphasize the role of computational modeling, high-throughput SAR studies, and preclinical models, including patient-derived xenografts and organoids, in guiding rational drug design. Emerging approaches integrating artificial intelligence, machine learning, and precision oncology hold potential to accelerate EGFR-targeted drug discovery. Combination strategies with immunotherapy and anti-angiogenic agents are considered in the context of improving patient outcomes. Together, ongoing advances in understanding EGFR signaling and resistance mechanisms are driving the development of next-generation inhibitors and personalized therapies, with the ultimate goal of overcoming drug resistance and improving patient outcomes in EGFR-mutant cancers.
- New
- Research Article
- 10.1016/j.semcancer.2026.01.002
- Feb 1, 2026
- Seminars in cancer biology
- Maddison Mckenzie + 6 more
Integrative spatial omics and artificial intelligence: transforming cancer research with omics data and AI.
- New
- Research Article
- 10.1016/j.nexres.2025.101257
- Feb 1, 2026
- Next Research
- Sunil Kumar Srivastava + 2 more
Advances in synthesis and characterization of bionanomaterials by using artificial intelligence and machine learning techniques: A critical review
- New
- Research Article
- 10.22214/ijraset.2026.76848
- Jan 31, 2026
- International Journal for Research in Applied Science and Engineering Technology
- Prof Pallavi Thakare
To combat digital echo chambers, this article presents "ThinkVerse," an AI-driven content moderation and awareness platform integrating explainable artificial intelligence, machine learning, and natural language processing. ThinkVerse empowers individuals, educators, and organizations by transparently and instantly analyzing web information for bias, sentiment, and ideological polarity. Many of the traditional subjectivity, disinformation, and algorithmic personalization problems in the field are resolved by anchoring advanced data analytics with contextual counter-narrative development. The review describes the system architecture, fundamental AI techniques, and workflow innovations that strike a balance between content suggestion and user-awareness visualization. It highlights ThinkVerse's contributions to a new generation of digital literacy by bridging the gaps between responsible information consumption and ethical AI frameworks, creating a technology ecosystem that supports critical thinking, open-mindedness, and intellectual diversity.
- New
- Research Article
- 10.61336/jiclt/26-01-39
- Jan 30, 2026
- Journal of International Commercial Law and Technology
The exponential growth of artificial intelligence (AI) and machine learning algorithms has fundamentally altered the functioning of modern markets, particularly within digital and e-commerce ecosystems. While these technologies enhance operational efficiency, market transparency, and consumer access, they also introduce unprecedented challenges for competition law enforcement. This paper examines the impact of artificial intelligence and machine learning algorithms on the competition law framework in India, with specific emphasis on algorithmic price-fixing, discriminatory pricing, personalized advertising, and autonomous or tacit collusion. The study adopts a doctrinal and analytical research methodology, analysing statutory provisions of the Competition Act, 2002, relevant case laws, and comparative international developments to evaluate the adequacy of India’s existing competition regime in addressing AI-driven anti-competitive conduct. It explores how self-learning algorithms, operating without explicit human coordination, may distort market outcomes by facilitating parallel pricing behaviour, reducing consumer choice, and exploiting asymmetries in data and information. A balanced approach is essential to ensure that technological innovation promotes consumer welfare without undermining the principles of free and fair competition in India’s evolving digital economy.
- New
- Research Article
- 10.1038/s41598-026-36162-5
- Jan 30, 2026
- Scientific reports
- Ali Rahimnezhad + 5 more
People's eating habits are influenced by psychological, social, cultural, and behavioral factors. Research shows that certain personality types expose people to risky eating behaviors. Given the complexity of nutrition-related factors and the limitations of traditional statistical methods, new approaches such as artificial intelligence and machine learning can play an effective role in analyzing multidimensional data and identifying complex patterns. This cross-sectional pilot study aimed to predict food addiction among university students by integrating demographic, anthropometric, and personality data with machine learning methods. The data consisted of 210 samples, which were first preprocessed to ensure data quality and integrity. Tomek Links and SMOTE techniques were used to correct class imbalance. Feature selection was performed using twelve different algorithms to identify the most important features for food addiction prediction. Then, ten different machine learning models were implemented, including Logistic Regression (LR), K-Nearest Neighbors (KNN), Gaussian Naive Bayes (GNB), Support Vector Classifier (SVC) with probability estimation, Decision Tree (DT), Random Forest (RF), AdaBoost, Gradient Boosting Classifier (GBC), CatBoost, and LightGBM. The models were trained on the training dataset and their performance was evaluated using accuracy, precision, recall, F1-score, and AUC metrics on the test dataset. In addition, the SHAP (SHapley Additive exPlanations) method was used to analyze feature importance and interpret the advanced models, determining the impact of each psychological and behavioral feature on the prediction of food addiction. The results showed that more advanced models, especially ensemble methods such as Random Forest and CatBoost, have high power in identifying complex patterns and accurately predicting food addiction behaviors.
SHAP analysis also showed that psychological characteristics such as feelings of worthlessness, impulsivity, anger, psychological distress, and rigid cognitive styles, along with weight, height, and body mass index (BMI), were among the most important factors affecting prediction. Although limitations such as a small sample size, a focus on a specific student population, and the use of self-report instruments reduce the generalizability of the results, the innovation of this study in combining psychological and artificial intelligence approaches for early identification of high-risk individuals is remarkable. Overall, the integration of personality profiles with advanced computational models can form the basis for the development of artificial intelligence-based screening tools and targeted interventions to improve nutritional behaviors in young populations.
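The class-rebalancing step in the pipeline above can be sketched as follows. This is a minimal pure-Python illustration of SMOTE's core idea, interpolating between a minority-class sample and one of its nearest minority-class neighbours, not the imbalanced-learn implementation such studies typically use:

```python
import random

def smote_oversample(minority, n_new, k=2, seed=0):
    """Generate synthetic minority-class samples by interpolating between
    a random minority sample and one of its k nearest minority neighbours
    (the core idea of SMOTE). `minority` is a list of feature tuples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance,
        # excluding x itself.
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Toy minority class: three 2-D feature vectors.
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_points = smote_oversample(minority, n_new=4)
print(len(new_points))
```

Because each synthetic point lies on a segment between two real minority samples, oversampling enriches the minority class without exact duplication; Tomek Links then removes borderline majority samples to sharpen the class boundary.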
- New
- Research Article
- 10.1007/s44196-026-01164-8
- Jan 30, 2026
- International Journal of Computational Intelligence Systems
- Kunal Hiwase + 5 more
A Review on the Applications and Implications of Artificial Intelligence and Machine Learning in Oncology