Articles published on Web access
2584 Search results
Sort by Recency
- New
- Research Article
- 10.1007/s00404-026-08353-y
- Mar 11, 2026
- Archives of gynecology and obstetrics
- Xiao Wang + 3 more
Early identification of gestational diabetes mellitus (GDM) is critical for mitigating adverse maternal and neonatal outcomes. Existing prediction models face limitations in clinical utility due to inconsistent variable selection and reliance on impractical biomarkers. This study aimed to develop and validate a resource-efficient GDM prediction model using routinely available first-trimester clinical indicators and deploy it as an open-access web tool. A retrospective cohort of 1818 pregnancies from a Shanghai tertiary hospital (2023) was randomly divided into training (70%) and validation (30%) sets. Three predictor screening strategies were compared: traditional logistic regression, least absolute shrinkage and selection operator (LASSO) regression with the 1SE rule, and LASSO regression with the MIN rule. Model performance was assessed by the area under the receiver operating characteristic (ROC) curve (AUC), calibration curves, decision curve analysis (DCA), and clinical impact curves (CIC). The optimal model was visualized as a nomogram and deployed as an open-access web calculator. The LASSO-1SE model achieved the best balance of accuracy and simplicity, with an AUC of 0.717 (95% CI 0.681-0.753), sensitivity 69.7%, specificity 64.9%, and high positive predictive value (PPV = 92.3%). The model showed robust calibration (Hosmer-Lemeshow P > 0.3) and clinical utility across risk thresholds in DCA and CIC. A nomogram and an open-access web calculator (https://wangxiao0922.shinyapps.io/20250309/) were developed for risk stratification. This resource-efficient tool enables early GDM risk stratification using routine clinical variables, supporting timely intervention in diverse healthcare settings.
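The LASSO-1SE screening strategy described in this abstract can be illustrated with a short scikit-learn sketch on synthetic data (not the authors' code; the predictors here are stand-ins for the clinical indicators): after cross-validation, the 1SE rule picks the largest penalty whose CV error stays within one standard error of the minimum, trading a little accuracy for a simpler model.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                    # stand-in clinical predictors
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)  # synthetic outcome

cv = LassoCV(cv=5, random_state=0).fit(X, y)

# Mean CV error and its standard error at each penalty on the path
mse_mean = cv.mse_path_.mean(axis=1)
mse_se = cv.mse_path_.std(axis=1) / np.sqrt(cv.mse_path_.shape[1])
threshold = mse_mean.min() + mse_se[mse_mean.argmin()]

# alphas_ is sorted in decreasing order, so the first penalty under the
# threshold is the largest one: the "1SE" choice (sparser, near-optimal)
alpha_1se = cv.alphas_[np.argmax(mse_mean <= threshold)]
```

Since the 1SE penalty is at least as large as the error-minimizing one, the resulting model typically keeps fewer predictors, which matches the abstract's "balance of accuracy and simplicity."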
- New
- Research Article
- 10.1145/3799421
- Mar 4, 2026
- Journal on Computing and Cultural Heritage
- Roland Filzwieser + 10 more
The bITEM project, carried out at the Natural History Museum Vienna (NHMW) and the Vienna Institute for Archaeological Science (VIAS) of the University of Vienna, explores new ways of digitally documenting and disseminating museum objects, their contexts and biographies. With the aim of enhancing access and interpretation, the project addresses the challenge of presenting complex histories of museum objects beyond physical displays. Using an interdisciplinary approach, bITEM combines methods and concepts such as 3D scanning and material semiotics with CIDOC CRM and the open-source database system OpenAtlas. The project focuses on five diverse case studies from the NHMW's collection, offering insights into how these objects’ spatial, social, and chronological contexts can be mapped and presented more holistically. The resulting openly accessible web application (bitem.at) provides interactive tools, 3D models, timelines, and network graphs. The study demonstrates the effectiveness of combining traditional research methods and archival holdings with advanced digital tools to document and visualise cultural as well as natural heritage objects in novel ways. This approach enhances public access while also serving as a valuable research data resource, establishing a model for future digital humanities and cultural heritage preservation projects, which could select and adapt parts of these tools to their own prerequisites and requirements. Thus, the bITEM project highlights the potential of interdisciplinary, digital approaches to enrich the understanding and accessibility of museum objects in general.
- New
- Research Article
- 10.1016/j.csi.2025.104055
- Mar 1, 2026
- Computer Standards & Interfaces
- Milton Campoverde-Molina + 1 more
Artificial intelligence in web accessibility: A systematic mapping study
- New
- Research Article
- 10.1016/j.cose.2025.104807
- Mar 1, 2026
- Computers & Security
- Yuying Du + 7 more
A large-scale measurement study of region-based web access restrictions: The case of China
- New
- Research Article
- 10.1016/j.ijmedinf.2025.106241
- Mar 1, 2026
- International journal of medical informatics
- Hongbing Liu + 7 more
Development and validation of a machine learning model to predict functional outcomes in patients with recent small subcortical infarction.
- New
- Research Article
- 10.1002/anie.202524655
- Feb 21, 2026
- Angewandte Chemie (International ed. in English)
- Galymzhan Moldagulov + 5 more
Understanding how metals coordinate to organic ligands is a precondition for the rational design of metal complexes and catalysts. Whereas certain types of ligands are capable of just one easy-to-predict coordination modality, others may present tens and sometimes even hundreds of coordination options (mono-, bi-, or polydentate), and predicting the correct one may be a challenge even to seasoned chemists. The current paper describes a "hybrid" computational approach in which a machine learning (ML) algorithm learns to predict complex coordination patterns using knowledge-based "rules" derived from the Cambridge Structural Database (CSD). This model is applicable to a broad scope of ligands (including hemilabile and haptic ones as well as those with denticity > 6) and different metals at different oxidation states. The algorithm's code is disclosed and can be readily deployed in RDKit via our RDMetallics Python wrapper. It is also deployed as a publicly accessible web portal for demonstration and use.
- New
- Research Article
- 10.1111/ocr.70103
- Feb 18, 2026
- Orthodontics & craniofacial research
- Xiaoqi Zhang + 7 more
In this exploratory pilot study, we profiled human periodontal ligament (PDL) transcriptomes during early orthodontic tooth movement (OTM). Early-stage (0-10 days) transcriptional dynamics under tension and compression remain insufficiently understood, and no dedicated user-friendly resource has been available for exploring large-scale human data. We established tooth extrusion and intrusion models to simulate tension- and compression-side PDL conditions, respectively, and collected human PDL samples within the first 10 days (days 0, 1, 3 and 10; ≤ 4 biological replicates per force-time subgroup). RNA-seq was performed, and five complementary gene selection strategies identified persistent force-specific genes, time-dependent concurrent genes, force-dominant genes, condition-specific markers and integrative network modules. We also developed a Shiny-based web application (HOTM) to integrate the processed data, enabling OTM-related analyses (custom comparisons, expression visualisation and correlation assessments) in a simple, code-free environment. Given the pilot design, candidate genes were prioritised using nominal P-values for exploratory screening; multiple-testing-adjusted statistics are provided for reference in the supplementary results and HOTM. Persistent extrusion-upregulated genes (CAR1, ACTB) were consistent with a proliferation-associated program under tension, whereas persistent intrusion-upregulated genes (CXCR6, CCL5) were consistent with immunomodulatory signatures under compression. Concurrently upregulated genes showed a time-related transition from early immune activation (CD28, day 1) to proliferation (MYC, day 3) and metabolic specialisation (GAPDH, day 10). Extrusion-dominant genes (BYSL) suggested controlled growth and metabolic stability, while intrusion-dominant genes (CLDN7) were associated with an osteoclast-favouring remodelling signature.
Condition-specific markers suggested a time-dependent progression from early immunoinflammatory cues to advanced tissue reorganisation. Integrative modules pointed to candidate hub genes linking extrusion-associated proliferation (SHCBP1, BIRC5, ARHGAP11A) and intrusion-associated remodelling (ATP6V1A, ATP6V1B2, SNX10) to a core metabolic and protein homeostasis baseline (MRPL47, PSMC2, UBE2V2). This exploratory pilot study presents a hypothesis-generating transcriptomic resource describing early (0-10 day) human PDL responses to orthodontic extrusion and intrusion and a publicly accessible web tool (HOTM) for convenient exploration of candidate genes and pathways. All findings should be interpreted as exploratory and require confirmation in independent, adequately powered cohorts with appropriate multiple-testing correction and orthogonal validation before mechanistic or clinical interpretation.
- Research Article
- 10.63512/sustjst.2024.1004
- Feb 12, 2026
- SUST Journal of Science and Technology (SUST JST)
- Mohammed Raihan Ullah
Web accessibility means removing barriers so that people with disabilities can use technology. Web accessibility errors refer to issues or barriers that prevent people with disabilities from accessing and using websites effectively. This study examined the relationship between web accessibility errors and technological advancement by comparing government and non-government websites in 27 countries across six continents. Various accessibility checker tools were used to analyze 20 websites in each country. The results revealed a moderate correlation between the Global Innovation Index score and the Accessibility Score of government websites, while no such correlation was observed for non-government websites. Regional analysis also highlighted significant variations in web accessibility across continents and countries. African government websites performed poorly in terms of web accessibility errors, while North American non-government websites showed a high prevalence of errors. We believe that our research will provide valuable insights and serve as a foundation for future studies in this field.
- Research Article
- 10.59256/ijrtmr.20260601007
- Feb 8, 2026
- International Journal Of Recent Trends In Multidisciplinary Research
- Dr D Kirubha + 4 more
Campus placement preparation requires consistent practice in quantitative aptitude, logical reasoning, and analytical problem-solving. Traditional preparation methods often lack structure, performance tracking, and real-time feedback, making it difficult for students to evaluate their progress effectively. This paper presents the design and development of an Aptitude Portal integrated with an Android application, aimed at providing a centralized and interactive platform for aptitude preparation. The system categorizes questions topic-wise and difficulty-wise (Easy, Medium, Hard), enabling progressive learning. It includes time-based quizzes, instant score evaluation, streak tracking, and performance monitoring to encourage consistent practice. The admin panel allows dynamic management of questions using Firebase Firestore, ensuring that content remains updated and relevant. The web application is built using Node.js, Express.js, and EJS, while the Android app is developed using Kotlin and XML. By integrating web and mobile access, the system ensures flexibility, accessibility, and scalability. The proposed system enhances student engagement, supports self-assessment, and improves placement readiness through structured and gamified learning.
- Research Article
- 10.11648/j.bmb.20261101.11
- Feb 6, 2026
- Biochemistry and Molecular Biology
- Hung Le
Selection of an appropriate host cell is a critical determinant of success in recombinant protein expression. In practice, host choice is still largely guided by individual experience, ad hoc consultation of the literature, and intuitive decision-making, often resulting in suboptimal expression outcomes and costly cycles of experimental trial and error. Despite several decades of accumulated empirical knowledge in the field, there is currently no systematic, evidence-based framework for forecasting host cell suitability from protein sequence and structural characteristics. The purpose of this study was to develop predictive models that enable rational selection of host cells for recombinant protein expression based on intrinsic protein features. To achieve this, we leveraged collective experimental experience embedded in publicly available structural data. Protein entries from the Protein Data Bank were curated and analyzed, and logistic regression approaches were applied to relate expression outcomes to a range of protein attributes, including structural parameters, stability indices, predicted subcellular localization, and post-translational modification requirements. Using these variables, we constructed and validated statistical models capable of forecasting expression preferences across four commonly used host systems: Escherichia coli, insect cells, mammalian cells, and yeast. Model performance was assessed using internal validation procedures, demonstrating that distinct combinations of protein features are associated with differential expression success among host types. In conclusion, this work provides an evidence-based and quantitative framework for predicting suitable host cells for recombinant protein expression. By translating accumulated empirical knowledge into practical predictive tools, the proposed models reduce reliance on subjective judgment and trial-and-error experimentation.
To facilitate broad adoption, the models, together with user guidance, have been implemented in a publicly accessible web server, offering a practical resource to improve experimental efficiency and success rates in protein expression studies.
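The modelling approach this abstract describes (logistic regression over protein attributes, four host classes) can be sketched with scikit-learn. The features and labels below are synthetic stand-ins, not the curated PDB data; real inputs would be sequence- and structure-derived attributes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in protein features (e.g. length, instability index, PTM burden, ...)
X = rng.normal(size=(400, 4))
hosts = np.array(["E. coli", "insect", "mammalian", "yeast"])
# Hypothetical labels: host suitability driven by the signs of two features
y = hosts[(X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)]

clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X[:1])  # per-host suitability probabilities
```

`predict_proba` returns one probability per host system, which is the kind of ranked "expression preference" output a web server built on such a model could expose.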
- Research Article
- 10.1007/s11604-026-01950-6
- Feb 5, 2026
- Japanese journal of radiology
- Ji Su Ko + 4 more
To evaluate the capability of large language models (LLMs), specifically GPT-4o and o1, in assessing adherence to the MI-CLEAR-LLM checklist in previously published studies. A total of 159 medical research articles related to LLM applications were analyzed. Two models, GPT-4o and o1, were tested in both text-based and image-based modalities. Structured prompts incorporating reasoning strategies such as chain-of-thought and few-shot learning were used to extract information corresponding to the six core items of the MI-CLEAR-LLM checklist. Human evaluations from a prior study served as the reference standard. Each model was evaluated across three independent trials to assess consistency. Accuracy and inter-trial agreement were calculated for each checklist item. Both GPT-4o and o1 demonstrated high accuracy in extracting objective, explicitly reported items, such as LLM specifications (name, manufacturer, web access; 85.9-100%) and stochasticity parameters (63.6-95%). However, performance declined for context-dependent items, including prompt session handling (Item 4, 51.5-70.7%) and test data independence (Item 6, 59.6-76.8%). Text-based models generally showed superior inter-trial consistency, with GPT-4o-text achieving the highest Fleiss' kappa (κ = 0.926). In contrast, image-based models exhibited greater variability (κ = 0.402-0.772). LLMs show strong potential for automating the evaluation of reporting quality in medical research, particularly for clearly structured content. However, they still face substantial challenges in extracting context-dependent or inferential information.
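The inter-trial agreement metric reported here (Fleiss' kappa) can be computed directly from per-item category counts. A minimal NumPy sketch with made-up trial data (five checklist items, three trials, binary ratings):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from a (n_items, n_categories) table of rating counts.
    Assumes the same number of raters (here: trials) for every item."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    p_cat = counts.sum(axis=0) / counts.sum()   # overall category proportions
    # Per-item agreement: fraction of rater pairs that agree
    P_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()                          # observed agreement
    P_e = np.sum(p_cat**2)                      # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts: columns = [item absent, item present] across 3 trials
counts = [[0, 3], [1, 2], [3, 0], [0, 3], [2, 1]]
kappa = fleiss_kappa(counts)
```

Perfect agreement across all trials yields kappa = 1; values near 0 indicate agreement no better than chance, which is how the reported κ range (0.402-0.926) is read.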
- Research Article
- 10.21083/caree.v1i1.9020
- Feb 5, 2026
- Canadian Agri-food & Rural Advisory, Extension and Education Journal
- Lee Beers + 4 more
Artificial intelligence (AI) tools have the potential to aid farmers in making on-farm decisions related to crop production, integrated pest management, nutrient use and application, and other farm management decisions. AI has been developed to increase the accuracy of precision agriculture and to model ideal planting populations based on climate. Many of these specialized AI tools are designed for use at the development level and may not be accessible to end users. Widely available general-purpose AI tools can be used to create crop management recommendations by extracting information from publicly accessible web resources. The reliability and utility of AI-produced recommendations are contingent on the quality and credibility of the crawled web data resources, as well as the clarity of the user-defined query prompts. Recommendations developed from these general AI tools can be inaccurate and/or locally irrelevant to Ohio farmers. The objective of this work was to develop an AI-based decision support tool tailored for Ohio soybean farmers, leveraging university extension and research resources, that produces agronomic recommendations aligned with Ohio’s environmental conditions and production systems. The Buckeye Bean BOT was developed using a transformer-based large language model (LLM). To train and inform the model, a curated collection of publicly available resources was compiled, including websites, publications, research reports, and other materials related to soybean production in Ohio. These resources were used as the primary data source for the LLM. After the model processed and integrated the information through web crawling, the BOT underwent evaluation by a panel of experts and Ohio State University (OSU) Extension professionals to assess its performance and accuracy. The AI-driven “Buckeye Bean Bot” will be available to Ohio soybean producers accessing Ohio State University resources published across college departments and Ohio State Extension websites.
- Research Article
- 10.1080/0735648x.2026.2621153
- Jan 30, 2026
- Journal of Crime and Justice
- Ryan C Meldrum + 2 more
The Dark Web has emerged as an important topic of study given that the platform can facilitate criminal behavior. Recently, social scientists have started to examine the behavioral and psychosocial profiles of individuals who access the Dark Web and how they differ from those who do not. Yet, further development of this line of inquiry is warranted given the limited number of criminologically oriented studies examining the traits, social relationships, and attitudes that may drive self-selection onto the Dark Web. To this end, we analyzed survey data collected on a national sample of U.S. adults (N = 1,750) to investigate whether prior criminal behavior, low self-control, deviant peers, and criminal attitudes are associated with self-reported Dark Web access. In support of our hypotheses, a series of bivariate and multivariate analyses revealed that individuals who report accessing the Dark Web are statistically significantly more likely to have a criminal history, be lower in self-control, associate with more peers who engage in cyber deviance, and hold attitudes more favorable toward larceny, violence, and cyber deviance. Considering these findings, criminologists are encouraged to bring the study of the Dark Web out of the periphery and prioritize it as a research focus.
- Research Article
- 10.1007/s11227-026-08233-x
- Jan 23, 2026
- The Journal of Supercomputing
- Masahito Ohue + 1 more
The extent to which a drug administered to a mother reaches the fetus is determined by its ability to cross the blood–placental barrier. Accurate knowledge of blood–placental barrier permeability is not only crucial for the development of safe drugs but also provides essential guidance for pharmacotherapy in pregnant women, where safety concerns are paramount. However, experimental evaluation remains challenging because animal models do not adequately recapitulate the human placenta, and human-based approaches such as cord blood analysis or placental perfusion are ethically and technically constrained. In this study, we employ gradient boosting decision trees (GBDT) to construct predictive models of blood–placental barrier permeability with relatively low computational cost. Two endpoints derived from publicly available human data were modeled separately: (i) in vivo log-transformed fetal–maternal blood concentration ratios (logFM), and (ii) ex vivo clearance indices (CI) from placental perfusion experiments. In both cases, our LightGBM-based models achieved higher predictive accuracy and better generalization compared with previous approaches. To facilitate practical use, we implemented a freely accessible web application, PBPredictor (https://pbpredictor.net), which provides real-time predictions of logFM and CI from SMILES inputs, along with programmatic access via a REST API. By integrating reliable machine learning with an easy-to-use platform, PBPredictor offers a scalable tool to support safer drug development and evidence-based treatment strategies during pregnancy.
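The GBDT regression step this abstract describes can be sketched with scikit-learn's gradient boosting as a stand-in for LightGBM. The descriptors and targets below are synthetic; in the real pipeline the inputs would be molecular descriptors computed from SMILES and the targets would be measured logFM or CI values.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in molecular descriptors and synthetic logFM-like targets
X = rng.normal(size=(300, 5))
y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.2, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # held-out R^2, the generalization check
```

Swapping in `lightgbm.LGBMRegressor` would follow the same fit/score pattern; the held-out split mirrors the generalization comparison the abstract reports.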
- Research Article
- 10.29408/jit.v9i1.33069
- Jan 20, 2026
- Infotek: Jurnal Informatika dan Teknologi
- Jumawal + 3 more
This study aims to compare the performance of three air humidity sensors—DHT11, DHT22, and 808H5V5—integrated with an ESP8266 microcontroller based on the Internet of Things (IoT). The system was designed to monitor air humidity in real-time through a web server that displays measurement data from the three sensors along with a hygrometer as a reference. Data collection was conducted every hour from 08:00 to 17:00 under stable environmental conditions. The results show that the DHT22 sensor achieved the highest accuracy, with an average deviation of 0.3% compared to the hygrometer. The 808H5V5 sensor demonstrated the fastest response with an average deviation of 0.6%, while the DHT11 sensor had an average deviation of 1.1% and slower response time. All three sensors successfully transmitted real-time data through an accessible web server. Based on these findings, the DHT22 is recommended as the most accurate sensor for IoT-based humidity monitoring systems, while the DHT11 and 808H5V5 remain suitable for simpler applications requiring moderate precision and lower cost.
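The deviation comparison reported above reduces to a mean absolute deviation from the hygrometer reference. A minimal sketch with hypothetical readings, chosen only to mirror the reported ordering (DHT22 most accurate, DHT11 least):

```python
import numpy as np

reference = np.array([60.1, 61.0, 62.4, 63.0])  # hygrometer (%RH), hypothetical
readings = {
    "DHT22":   np.array([60.3, 61.2, 62.1, 63.4]),
    "808H5V5": np.array([60.8, 61.5, 61.9, 63.6]),
    "DHT11":   np.array([61.2, 62.3, 61.1, 64.0]),
}

# Mean absolute deviation of each sensor from the reference
deviation = {name: float(np.mean(np.abs(vals - reference)))
             for name, vals in readings.items()}
best = min(deviation, key=deviation.get)
```

The same calculation over the study's hourly 08:00-17:00 logs yields the per-sensor average deviations (0.3%, 0.6%, 1.1%) that the abstract reports.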
- Research Article
- 10.1128/msystems.01485-25
- Jan 20, 2026
- mSystems
- Michel Brück + 3 more
Post-transcriptional regulation is a key control layer in gene expression. Yet, resources integrating antisense RNAs (asRNAs), RNA processing sites, and RNA-protein interactions are scarce for archaeal organisms. Here, we combine multiple RNA-seq strategies and RIP-seq to expand the Sulfolobus acidocaldarius transcriptome with 1,048 asRNAs, thousands of transcript processing sites, and the interactomes of the essential RNA chaperones Sm-like archaeal protein (SmAP)1 and SmAP2. Integrating the newly generated data into a re-analysis of heat-shock transcriptomics reveals consistent upregulation of asRNAs and antagonistic expression profiles with their cognate mRNAs. Moreover, our publicly accessible web atlas provides a community platform to explore these datasets and assist in the formulation of new hypotheses about archaeal RNA regulation.
- Research Article
- 10.32855/1930-014x.1166
- Jan 16, 2026
- Fast Capitalism
- Katie Ellis + 1 more
Community Accessibility: Tweeters Take Responsibility for an Accessible Web 2.0
- Research Article
- 10.3389/fnagi.2025.1700771
- Jan 12, 2026
- Frontiers in Aging Neuroscience
- Xinyang Wang + 8 more
Introduction: Post-stroke cognitive impairment (PSCI) is a prevalent and disabling consequence of stroke, yet objective tools for its early identification are lacking. This study aimed to develop and validate an interpretable machine learning (ML) model based on electroencephalography (EEG) to support the early detection of PSCI. Methods: We conducted a study involving 174 participants, including stroke patients with and without cognitive impairment and age-matched healthy controls. Resting-state EEG was acquired from all subjects, and multidimensional features, including power spectral ratios and microstate parameters, were extracted. Feature selection was performed using LASSO regression, random forest, and the Boruta algorithm. Five machine learning models were evaluated and compared based on their area under the curve (AUC), accuracy, Brier score, calibration plots, and decision curve analysis. Model predictions were explained using SHAP (Shapley Additive Explanations). The final validated model was deployed as an interactive web-based application. Results: Seven EEG features were identified as most predictive of PSCI: the delta-plus-theta to alpha-plus-beta ratio (DTABR) in frontal, central, and global regions; the mean microstate duration of classes A and B (A-MMD, B-MMD); the mean frequency of microstate D (D-MFO); and the mean coverage of microstate A (A-MC). The random forest model demonstrated the highest performance (AUC = 0.91, accuracy = 0.83, specificity = 0.88, Brier score = 0.12), alongside satisfactory calibration and a positive net clinical benefit. The model was further validated on an independent external cohort (n = 42), showing robust predictive performance (AUC = 0.97, accuracy = 0.90). An accessible web tool was created for individualized risk prediction (https://eeg-predict.streamlit.app/). Discussion: The findings suggest that an interpretable EEG-based ML model can provide accurate early screening of PSCI.
Integration of this approach into clinical workflows may support personalized rehabilitation strategies and optimize post-stroke care. Future studies are warranted to validate the model in larger, multicenter cohorts.
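The DTABR feature named in the results is a band-power ratio: (delta + theta) power over (alpha + beta) power. A minimal sketch on a synthetic signal (the sampling rate and signal composition are hypothetical, chosen so alpha dominates as in a cognitively healthy recording):

```python
import numpy as np

fs = 250                       # sampling rate (Hz), hypothetical
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": strong 10 Hz alpha plus a weaker 6 Hz theta component
x = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)

freqs = np.fft.rfftfreq(len(x), 1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2  # simple periodogram

def band_power(lo, hi):
    """Total spectral power in the [lo, hi) Hz band."""
    return psd[(freqs >= lo) & (freqs < hi)].sum()

# DTABR = (delta + theta) / (alpha + beta), using standard EEG band edges
dtabr = (band_power(1, 4) + band_power(4, 8)) / \
        (band_power(8, 13) + band_power(13, 30))
```

A DTABR well below 1 reflects alpha/beta dominance; slowing of the EEG (more delta/theta power) pushes the ratio up, which is why it serves as a PSCI predictor.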
- Research Article
- 10.55606/jurritek.v5i1.7684
- Jan 3, 2026
- JURAL RISET RUMPUN ILMU TEKNIK
- Faid Rama Daniy + 4 more
The integration of the Internet of Things (IoT) into the Web of Things (WoT) offers cross-platform interoperability but presents significant security challenges for constrained devices. This study aims to evaluate the effectiveness and efficiency of security mechanisms in three major WoT protocols: HTTP, CoAP, and MQTT. The research methodology employs a Systematic Literature Review (SLR) following PRISMA guidelines, reviewing 22 selected articles published between 2020 and 2025. The analysis utilizes PICOC criteria to compare communication overhead, computational consumption, and security mechanisms such as DTLS, OSCORE, and TLS integration. The results indicate that CoAP, combined with OSCORE and EDHOC mechanisms, provides the optimal balance between energy efficiency and end-to-end security for resource-constrained devices. MQTT demonstrates superiority in throughput and data transmission speed but requires additional security layers to ensure data confidentiality. Meanwhile, HTTP dominates in terms of Web service integration and access control, despite having the highest overhead burden. In conclusion, no single protocol is superior for all scenarios; the choice of protocol in a WoT architecture must be based on the trade-offs between latency, resource efficiency, and system security requirements.
- Research Article
- 10.5753/jis.2026.5357
- Jan 1, 2026
- Journal on Interactive Systems
- Ygor Barros + 6 more
Governments worldwide have increasingly adopted centralized models for delivering public services to citizens. In 2019, Brazil launched the Gov.br portal, which consolidates the digital channels of all federal government agencies and provides unified access to information and services. This study aims to assess the accessibility of the Gov.br portal using three automated evaluation tools (ASES, AccessMonitor, and TAW) and a contrast verification tool (Contrast Checker), in addition to a manual evaluation conducted by a low vision web accessibility specialist. This qualitative and exploratory analysis reveals that, despite Gov.br achieving favorable scores in automated evaluations, significant challenges remain regarding the user experience of individuals with low vision. The most frequent issues identified include the portal’s lack of responsiveness when displayed at maximum zoom on smartphones, insufficient color contrast, and the absence of contextual information in links. As a contribution, the study proposes corrective measures to enhance the website’s accessibility, thereby promoting inclusive access for all users.