Articles published on Search model
5753 Search results
- New
- Research Article
- 10.9734/ajeba/2026/v26i22186
- Feb 21, 2026
- Asian Journal of Economics, Business and Accounting
- Mohamed M El-Gibaly
The main objective of this work is to balance value analysis and value engineering to achieve cost reduction. This work investigates cost reduction efforts through production and design cells. A quantitative model is formed to investigate the operative and non-operative integration of the value analysis and value engineering approaches. The first part investigates variance analysis for controlling cost and cost reduction. The second part relates to value analysis and cost reduction within production cells. Value engineering, which aims to reduce costs without reducing quality or effectiveness, is most important in the design phase, whereas value analysis is necessary before implementation. The model was initially designed using game theory to identify the negotiating parties from the production and design cells during negotiations on the proposed cost of new or existing products. However, the discussion showed that it is difficult to bring the negotiation to a close using game theory alone. Accordingly, search theory models were used as complementary models, helping to reach an appropriate cost-reduction strategy and to determine a point of abstention. Various recommendations are offered at the end of the work.
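The "point of abstention" in sequential search can be illustrated with the textbook reservation-value stopping rule. The sketch below is our illustration, not the paper's model: cost proposals are assumed uniform on [0, 1] and the per-round search cost is an arbitrary example value.

```python
import random

# Illustrative "point of abstention" for sequential search over cost
# proposals. For X ~ Uniform(0, 1), the expected gain from one more
# draw given reservation r is E[(r - X)+] = r^2 / 2, so the optimal
# reservation solves c = r^2 / 2, i.e. r = sqrt(2c).
# The search cost below is an assumed example value.

search_cost = 0.02
reservation = (2 * search_cost) ** 0.5    # point of abstention

random.seed(1)
draws = 0
while True:
    draws += 1
    proposal = random.random()            # a proposed unit cost
    if proposal <= reservation:           # stop: abstain from further search
        break

print(round(reservation, 2), draws)
```

Any proposal at or below the reservation value is accepted immediately; raising the search cost raises the reservation, so the searcher becomes less selective.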
- New
- Research Article
- 10.1002/chem.202503630
- Feb 18, 2026
- Chemistry (Weinheim an der Bergstrasse, Germany)
- Daniil A Boiko + 1 more
The rapid adoption of artificial intelligence (AI) and machine learning (ML) in chemistry coincides with increasing structural pressures on academic research, including funding constraints, talent competition, and changing attitudes toward scientific careers. In this Perspective, we argue that this combination of trends may reshape how and by whom chemical knowledge is produced, rather than simply increase research productivity. We discuss recent developments in the automation of experimentation and self-driving labs, ML-based modeling and digital twins, and the use of large language models for literature search, manuscript preparation, and review, and place them against the current financial and social pressures on universities. We outline these trends in the hope of softening the transition for the chemical research community and urging researchers, institutions, and funders to make their research ecosystems more resilient. Finally, we discuss possible shifts in the composition and structure of research groups and in the balance between universities, industry, and government laboratories, raising the central question: who will produce chemical knowledge in the research landscape changed by the wider adoption of AI technologies?
- New
- Research Article
- 10.3758/s13423-025-02772-9
- Feb 17, 2026
- Psychonomic bulletin & review
- David E Kieras + 1 more
This article concerns simple visual-search tasks that require people to respond "yes" or "no" about whether a specified target object is present in stimulus displays containing relatively small numbers of typically simple objects. The currently most popular cognitive theories regarding human performance in these tasks claim that a person's response time depends on the number of shifts of covert visual attention required to choose the response. Such theories provide no significant roles for cognitive task strategies, eye movements, and early-vision limitations (e.g., lower visual resolution and increased crowding effects for displayed objects with greater retinal eccentricity). In contrast, the present research used the EPIC computational cognitive architecture to construct precise simulation models that rely on these more basic mechanisms without assuming any role for covert attention. Results from the simulations show that models systematically incorporating early-vision limitations, eye movements, and parsimonious cognitive task strategies may suffice to account precisely for both the speed and accuracy of human performance during simple visual search. These models succeed at fitting not only empirical data aggregated across participants but also data from different subsets of individual participants who had similar visual parameter values and task strategies. Thus, it appears that covert-attention shifting is not necessary to explain simple visual search. Future models of visual search can be made more veridical and complete by avoiding ill-defined concepts of attention and instead further developing theories of visual mechanisms, task strategies, and motor mechanisms to explain empirical phenomena.
- New
- Research Article
- 10.1371/journal.pone.0339117
- Feb 13, 2026
- PLOS One
- Jinhan Liu + 2 more
In submersible search and rescue operations, locating a missing or faulty submersible requires predicting its position over time. Taking the marine environment simulated by the HYCOM model as the sample, a Kalman filter model predicts the submersible's position from the position reports it transmits to the mother ship during normal operation, providing information support for follow-up search and rescue. Monte Carlo simulation is then used to quantify the probability of the submersible's possible area in four post-fault scenarios and to obtain the initial search deployment point, that is, the minimum plane projection area covering the samples. Python was used to quantify the probability of finding the submersible as time passes and the cumulative search results. We also compared the proposed method with previous methods to illustrate its advances. Finally, by introducing a nearest-neighbor association algorithm into the multi-target tracking algorithm, the positions of multiple submersibles in the same area can be predicted.
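The position-prediction step can be sketched with a minimal constant-velocity Kalman filter over noisy position reports. The state layout, noise covariances, and measurements below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Minimal constant-velocity Kalman filter: state [x, y, vx, vy],
# observing position only. Q, R, and the track are assumed values.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only measurement
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.50 * np.eye(2)                        # measurement noise (assumed)

x = np.zeros(4)                             # initial state estimate
P = np.eye(4)                               # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with position report z
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed a short track of noisy position reports
for z in [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([2.1, 1.0])]:
    x, P = kalman_step(x, P, z)

print(np.round(x[:2], 2))                   # filtered position estimate
```

Between reports, repeated predict steps (without the update) give the growing search region that the Monte Carlo stage then samples.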
- Research Article
- 10.1111/manc.70036
- Jan 31, 2026
- The Manchester School
- Joaquín Naval + 1 more
ABSTRACT This paper explores the quantitative role played by Employment Protection Legislation over employment and on‐the‐job training in the presence of dual labor markets across OECD economies. We extend the search and matching model by introducing formal education and investment decisions in training by firms, and a gap in firing costs between fixed‐term and unlimited contracts. The quantitative analysis shows that the model accounts well for cross‐country variation in employment and reasonably well for productivity, while it captures only a modest share of the dispersion in training and temporary employment. Decomposing the sources of heterogeneity, we find that differences in firing costs explain only 2%–20% of the observed cross‐country dispersion, whereas education accounts for most of the variation in employment and productivity, and also explains part of the cross‐country differences in training.
- Research Article
- Jan 29, 2026
- ArXiv
- Francesco Boccardo + 2 more
We address the problem of how individuals within a group can efficiently integrate their private behavior with information provided by others. To this end, we consider the model of collective search introduced in [https://doi.org/10.1103/PhysRevE.102.012402], under a minimal setting with no olfactory information. Agents combine a private exploratory behavior with social imitation, aligning with their neighbors, and weigh the two contributions with a single "trust" parameter that controls their relative influence. We find that an optimal trust parameter exists even in the absence of olfactory information, as was observed in the original model. Optimality is dictated by the need to explore the minimal region of space that contains the target. An optimal trust parameter emerges from this constraint because it tunes imitation, which induces a collective mechanism of inertia affecting the size and path of the swarm. We predict the optimal trust parameter for cohesive groups in which all agents interact with one another. We show how optimality depends on the initialization of the agents and the unknown location of the target, in close agreement with numerical simulations. Our results may be leveraged to optimize the design of swarm robotics or to understand information integration in organisms with decentralized nervous systems, such as cephalopods.
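The trust-weighted update described in the abstract can be sketched as a blend of a private random direction and the group's mean heading. The update rule, group size, and parameter values below are our reading of the description, not the paper's code.

```python
import numpy as np

# Trust-weighted heading update for a cohesive group (all-to-all
# neighbors): trust = 0 is pure private exploration, trust = 1 is
# pure imitation of the mean heading. All numbers are illustrative.
rng = np.random.default_rng(0)

def step_headings(headings, trust):
    """One synchronous update of all agents' unit heading vectors."""
    # Private exploration: a fresh random unit vector per agent
    private = rng.normal(size=headings.shape)
    private /= np.linalg.norm(private, axis=1, keepdims=True)
    # Social imitation: align with the group's mean heading
    social = headings.mean(axis=0)
    social /= np.linalg.norm(social)
    # Blend the two contributions and renormalize
    new = (1 - trust) * private + trust * social
    return new / np.linalg.norm(new, axis=1, keepdims=True)

headings = rng.normal(size=(20, 2))
headings /= np.linalg.norm(headings, axis=1, keepdims=True)
for _ in range(50):
    headings = step_headings(headings, trust=0.8)

# High trust yields strong alignment: mean-heading norm approaches 1
alignment = float(np.linalg.norm(headings.mean(axis=0)))
print(round(alignment, 2))
```

Sweeping `trust` between 0 and 1 in such a simulation is how one would probe for the optimal value the paper predicts: too little trust scatters the swarm, too much freezes its exploration.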
- Research Article
- 10.1093/ej/ueag012
- Jan 24, 2026
- The Economic Journal
- Juan J Dolado + 2 more
Abstract We develop an equilibrium search model of the coexistence of regular and flexible work arrangements, calibrated to evaluate the U.K.’s zero-hours contract (ZHC). Our findings reveal mixed equilibrium and welfare effects. ZHCs stimulate job creation among firms facing highly volatile business conditions, increasing total employment but potentially reducing regular jobs. Simultaneously, ZHCs boost labour force participation by attracting individuals who prefer flexible work schedules, which may create more congestion for regular jobs in a random-search environment. Our calibration demonstrates that the macroeconomic consequences of alternative work arrangements depend crucially on the job creation margin and on workers’ valuation of flexibility.
- Research Article
- 10.1111/iere.70054
- Jan 23, 2026
- International Economic Review
- Erlend Eide Bø
ABSTRACT How do rental investors affect housing price dynamics? I develop a search model that allows housing owners to invest in rental housing. The model matches the high investor share and housing price increase observed in a housing boom in Oslo, Norway, while featuring increasing price‐to‐rent, and correlation of the buy‐to‐let share with housing price growth. In the model, an exogenous shock to population inflow increases demand for both owned and rented housing. Increased rental demand induces more buy‐to‐let investors to enter the market, adding extra demand to the housing market. Search frictions are important to explain an increasing price‐to‐rent ratio.
- Research Article
- 10.3390/universe12010027
- Jan 19, 2026
- Universe
- Li Han + 4 more
With the rapid expansion of pulsar survey data driven by advanced radio telescopes such as FAST, automated detection methods have become crucial for the efficient and accurate identification of single-pulse signals. A key challenge in this task is the extreme class imbalance between genuine pulsar pulses and radio frequency interference (RFI), which significantly hampers classifier performance, particularly in low signal-to-noise ratio (S/N) environments. To address this issue and improve detection accuracy, we propose Pulsar-WRecon, a Wasserstein GAN with Gradient Penalty (WGAN-GP)-based framework designed to generate realistic single-pulse profiles. The synthetic samples generated by Pulsar-WRecon are used to augment training data and alleviate class imbalance. Building upon the enhanced dataset, a Convolutional Kolmogorov–Arnold Network (CKAN) is further introduced as a novel hybrid model that integrates convolutional layers with KAN-based functional decomposition to better capture complex patterns in pulse signals. On the three-channel pulsar images from the HTRU1 dataset, our method achieves a recall of 97.5% and a precision of 98.5%. On the DM time series image dataset, FAST-DATASET, it achieves a recall of 93.2% and a precision of 92.5%. These results validate that combining generative data augmentation with an improved model architecture can effectively enhance the precision of single-pulse detection in large-scale pulsar surveys, especially in challenging, real-world conditions.
- Research Article
- 10.1007/s40751-025-00189-6
- Jan 13, 2026
- Digital Experiences in Mathematics Education
- Rowena Merkel + 3 more
Abstract Inquiry-based learning has been shown to foster conceptual understanding through cycles of hypothesis generation and testing, making it particularly relevant for conceptual change when prior knowledge conflicts with new concepts. While inquiry-based learning has been extensively applied in science education, its use in mathematics is still developing. For example, it is often connected to exploratory tasks or problem-solving phases prior to instruction (PS-I). The present study aims to address this gap by investigating whether digital inquiry, including explicit cycles of experimentation, can foster conceptual change in the transition from natural to rational numbers. Using the Scientific Discovery as Dual Search (SDDS) model, we designed a digital learning environment that enables students to generate hypotheses, create visual representations with dynamic fraction bars, test their hypotheses through experimentation, and revise their reasoning based on feedback. We examined the impact of prompts that required the use of digital tools for generating and manipulating dynamic fraction bars, as well as providing empirical feedback on students’ understanding of fractions in two contexts: basketball and color mixing. In an experiment with 231 fifth graders, we found a significant indirect effect of prompts to use the digital tools on the post-test, mediated by the quality of an external visual representation generated with dynamic fraction bars and verbal reasoning. Additionally, there was a positive and significant indirect effect of context on the post-test, favoring the basketball context, mediated by the quality of verbal reasoning. However, no effect of empirical feedback was found. The findings of this study suggest that using dynamic fraction bars and familiar contexts leads to more elaborated learning activities and helps students to shift from natural number concepts to understanding fractions.
- Research Article
- 10.1186/s42400-025-00540-9
- Jan 12, 2026
- Cybersecurity
- Yunong Wu + 6 more
Abstract The boomerang attack serves as a potent cryptanalytic tool for assessing the security of block ciphers. Over the past few years, various automatic search models for boomerang distinguishers have been proposed for block ciphers with different structures. This paper presents improved Mixed-Integer Linear Programming (MILP)-based search models for both single-key and related-key boomerang distinguishers. In the single-key scenario, we propose a method for dynamic allocation of active S-boxes. Our search model for single-key boomerang distinguishers characterizes the distinguisher probability more accurately, addressing the suboptimality issue caused by non-fixed weight assignments in prior models. In the related-key scenario, a search model for related-key boomerang distinguishers is proposed for block ciphers with bit-level key schedule algorithms, where the probability of the boomerang switch is ensured to be 1. To validate the effectiveness of our models, we apply them to the lightweight block cipher LILLIPUT based on Extended Generalized Feistel Networks (EGFN), conducting a comprehensive security analysis against boomerang attacks. Using our models, we successfully derive single-key boomerang distinguishers for 8 to 13 rounds and a 15-round related-key boomerang distinguisher. Notably, the data complexity required for the 13-round single-key distinguishing attack is reduced by a factor of $2^{3.172}$, and the 15-round related-key boomerang distinguisher with a probability of $2^{-58}$ is currently the longest-round distinguisher among all known distinguishers for LILLIPUT. The application results fully demonstrate the capability of our models in evaluating the security of block ciphers. This research not only provides new insights and methods for the design and analysis of lightweight block ciphers, but also deepens the understanding of the security characteristics of LILLIPUT.
- Research Article
- 10.3390/su18020759
- Jan 12, 2026
- Sustainability
- Natalia Bakhtadze + 4 more
The article presents an approach to synthesizing artificial intelligence agents (AI agents), in particular, control and decision support systems for process operators in various industries. Such a system contains an identifier in the feedback loop that generates digital predictive associative search models of the Just-in-Time Learning (JITL) type. It is demonstrated that the system can simultaneously solve (outside the control loop) two additional tasks: online operator pre-training and mutual adaptation of the operator and the system based on real-world production data. Solving the latter task is crucial for teaching the operator and the system collaborative handling of abnormal situations. AI agents improve control efficiency through self-learning, personalized operator support, and an intelligent interface. Stabilization of process variables and minimization of deviations from optimal conditions make it possible to operate process plants close to constraints with sustainable product qualities. Along with a higher yield of target product(s), this reduces equipment wear and tear, utilities consumption, and associated harmful emissions. This is the key merit of Model Predictive Control (MPC) systems, and it justifies their application. The JITL-type models proposed in the article are more precise than the conventional ones used in MPC; therefore, they enable operation even closer to process constraints. Altogether, this further improves the reliability of production systems and contributes to their sustainable development.
- Research Article
- 10.62056/akp2tx4e-
- Jan 8, 2026
- IACR Communications in Cryptology
- Mathieu Degré + 2 more
The meet-in-the-middle (MITM) attack is a powerful cryptanalytic technique leveraging time-memory tradeoffs to break cryptographic primitives. Initially introduced for block cipher cryptanalysis, it has since been extended to hash functions, particularly preimage attacks on AES-based compression functions. Over the years, various enhancements such as superposition MITM (Bao et al., CRYPTO 2022) and bidirectional propagations have significantly improved MITM attacks, but at the cost of increasing complexity of automated search models. In this work, we propose a unified mixed integer linear programming (MILP) model designed to improve the search for optimal pre-image MITM attacks against AES-based compression functions. Our model generalizes previous approaches by simplifying both the modeling and the corresponding attack algorithm. In particular, it ensures that all identified attacks are valid. The results demonstrate that our framework not only recovers known attacks on AES and Whirlpool but also discovers new attacks with lower memory complexities, and new quantum attacks.
- Research Article
- 10.56294/digi2026309
- Jan 6, 2026
- Diginomics
- Anil Kumar Sinha + 3 more
With the exponential growth of web-based content, efficient retrieval of contextually relevant textual information starting from seed URLs has become a critical challenge in web content mining and information retrieval. Traditional crawling and search methods, such as breadth-first search (BFS), depth-first search (DFS), best-first (focused crawling), topic-sensitive PageRank, and context-graph models, typically suffer from limitations such as parameter tuning overhead, lack of contextual understanding, requirement of large training datasets, high computational cost, and the need for specialised infrastructure. This research presents a comprehensive comparative study of multiple search and crawling models applied to textual retrieval from seed URLs, with a particular focus on their performance across diverse web structures (static vs dynamic) and content types. Employing a unified experimental framework implemented in Python with a MySQL backend, we evaluate each algorithm using standard performance metrics (precision, recall, F1-score) alongside newer metrics such as coverage, relevance score, search time, memory usage, throughput, and harvest rate. Machine-learning-enabled variants (for example, semantic-BFS and semantic-DFS using transformer-based embeddings) are also incorporated to assess their value over purely structural methods. Our results demonstrate that while semantic-enhanced BFS (Semantic-BFS) yields higher coverage, better relevance, and faster response time in many scenarios, it shows limitations in classical metrics like precision/recall/F1 when ground-truth labels are inadequate for semantic relevance. The study provides insights into algorithmic trade-offs and suitability for different web architectures, and proposes hybrid strategies for next-generation crawlers and retrieval systems. The findings contribute toward the design of more adaptive, semantic-aware, and scalable web content mining frameworks.
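The semantic-BFS idea, a breadth-first crawl that only expands pages sufficiently similar to the query, can be sketched on a toy link graph. The graph, page texts, threshold, and the bag-of-words similarity below are stand-ins for real fetching and transformer embeddings, all assumptions for illustration.

```python
from collections import deque

# Toy link graph and page texts standing in for real crawled pages.
links = {
    "seed": ["a", "b"],
    "a": ["c"],
    "b": ["d"],
    "c": [],
    "d": [],
}
texts = {
    "seed": "web crawling and information retrieval",
    "a": "focused crawling retrieval methods",
    "b": "cooking recipes and gardening",
    "c": "semantic retrieval with embeddings",
    "d": "sports scores",
}

def similarity(query, text):
    """Jaccard word overlap; a placeholder for embedding cosine similarity."""
    q, t = set(query.split()), set(text.split())
    return len(q & t) / len(q | t)

def semantic_bfs(seed, query, threshold=0.1):
    """BFS that expands only pages whose similarity clears the threshold."""
    visited, relevant = {seed}, []
    queue = deque([seed])
    while queue:
        url = queue.popleft()
        if similarity(query, texts[url]) >= threshold:
            relevant.append(url)
            for nxt in links[url]:          # expand only relevant pages
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
    return relevant

print(semantic_bfs("seed", "crawling retrieval"))  # → ['seed', 'a', 'c']
```

Note how the off-topic page "b" is visited but not expanded, so its child "d" is never fetched; this pruning is what raises the harvest rate over plain BFS.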
- Research Article
- 10.1016/j.inffus.2025.103489
- Jan 1, 2026
- Information Fusion
- Zexu Lin + 5 more
Hierarchical Mixture-of-Experts model for unified code search
- Research Article
- 10.17705/1jais.00961
- Jan 1, 2026
- Journal of the Association for Information Systems
- Kai Li + 3 more
As search engines are leading revenue growth in online marketing, search marketing has become a popular area of academic research. Although search engine advertising has interested researchers for decades and much has been learned, one thing that puzzles scholars is why search engine optimization companies are tolerated rather than excluded from the market, even though they capture a significant share of the advertising market. In this paper, we shed light on this phenomenon and establish an analytical model based on organic search quality. Through analysis of the model, we were able to draw several intriguing conclusions. First, there is no strictly positive correlation between advertisers’ willingness to pay and the click price of paid search marketing. In other words, the click price may decrease as advertisers’ willingness to pay increases. Secondly, improving the effectiveness of a search engine has the potential to attract more searchers, but it may also result in a decline in the search engine’s profits. Finally, a search engine may achieve higher profits by allowing search engine optimization firms to remain in the market rather than driving them out. We discuss our contribution to search engine marketing and provide implications for search engines, search engine optimization firms, and advertisers.
- Research Article
- 10.1088/2631-8695/ae37ca
- Jan 1, 2026
- Engineering Research Express
- Yide Qian + 3 more
Abstract To address the problems of poor convex quality in single-neighborhood search and inferior controllability during multi-convex-shape transitions for automated vehicles, a dynamic trajectory planning method based on Variable Differential Neighborhood Search (VDNS) is proposed. A differential neighborhood model for automated driving is established. By integrating the Maximum Volume Inscribed Ellipse method into the differential neighborhood search model, the maximum differential neighborhood at the current moment is obtained. An evaluation function encompassing longitudinal distance, lateral deviation, and safety margin is designed to dynamically select the optimal differential neighborhood for the next moment. Trajectory smoothness induced by the variable differential neighborhood search is optimized using a barrier function, and the switching stability index for the variable differential neighborhood search is derived using the average dwell-time method, ensuring controllable multi-convex-shape transitions. Experimental results demonstrate that, compared to the single-neighborhood search method, the proposed VDNS algorithm significantly improves the performance of convex space heuristic search in both dynamic and static environments, increasing the drivable convex space coverage by 20% and 7%, respectively. During two consecutive obstacle avoidance maneuvers, the convergence times for switching stability are controlled within 0.6 s and 0.4 s, with maximum velocity overshoots of only 1% and 2.94%. The results indicate that the VDNS algorithm effectively enhances trajectory generation quality while maintaining strong controllability in environments with both dynamic and static obstacles.
- Research Article
- 10.1109/tie.2025.3642259
- Jan 1, 2026
- IEEE Transactions on Industrial Electronics
- Fei Xu + 3 more
Multiobjective Optimization With Minimum Output Error and Maximum Efficiency for WPT Systems Based on Discrete Neighborhood Search–Model Predictive Control Scheme
- Research Article
- 10.5194/isprs-archives-xlviii-1-w6-2025-199-2025
- Dec 31, 2025
- The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Juliana Lyn Satore + 4 more
Abstract. Maritime Search and Rescue (SAR) operations require effective and accurate object detection systems capable of identifying various targets in dynamic sea environments and low-light situations. The paper presents a comparative study of the YOLOv10, YOLOv11, and YOLOv12 networks in multi-class marine detection using UAV images. The SeaDronesSee Odv2 dataset has been preprocessed using physics-based augmentation that mimics environmental changes, such as fog, noon, sunset, dawn, and cloudy scenarios. A multi-resolution tiling procedure was implemented to preserve the image consistency of small objects. Results show that YOLOv11s offers the best accuracy-efficiency trade-off, with an mAP@0.5 of 0.888 and an F1-score of 0.872 at a reasonable inference time. Precision-recall analysis showed that large maritime objects were detected with high precision, while small objects were detected with average recall. The results show that multi-resolution preprocessing, as well as physics-based augmentation, enhances the robustness and generalization of the network. Altogether, YOLOv11s is the most stable version to use in real-time maritime SAR missions with UAVs due to its ability to handle a variety of visual conditions.
- Research Article
- 10.59835/2413-5372.2025.3-4/324-346
- Dec 30, 2025
- Herald of criminal justice
- Vasyl Volodymyrovych Yermak
The article provides a comprehensive analysis of the detection and documentation of espionage at enterprises of Ukraine’s defence-industrial complex (DIC) as a continuous process in which the initial identification of indicators of an encroachment must be immediately projected onto the procedural perspective of the information obtained. The aim is to substantiate an evidence-oriented model of operational search and operational-search documentation, taking into account the predominance of digital traces, the security-regime sensitivity of the DIC environment, and heightened standards of judicial scrutiny of admissibility. The methodological basis comprises normative and formal-logical approaches, forensic modelling of the modus operandi (preparation–execution–concealment), as well as a risk-oriented analysis of «nodes» of evidence loss within digital and mixed evidentiary sets. It is shown that the effectiveness of counteraction is determined not so much by the volume of information obtained as by its reproducibility, traceable provenance, capacity for independent verification, and integration into an adversarial procedure. A refinement of the conceptual framework (factual data, operational-search information, materials of operational-search activity (OSA), OSA documents) is proposed, and typical procedural defects in their conflation are identified, which generate gaps in reconstructing data provenance and increase the risks of inadmissibility. Scientific novelty lies in substantiating an operational matrix for the transition from factual data to procedural sources of evidence and in proposing an «evidence passport» as a standardized instrument for documenting the trajectory of an evidentiary object, key environmental parameters, access logging, and integrity assurance procedures.
It is proven that in espionage cases within the DIC, judicial control performs not only a safeguard function but also an evidence-forming one: it must verify the proportionality of interferences, the boundaries of access to digital contours, the correctness of procedural integration of OSA and covert investigative (search) actions (CISA) results, and the reproducibility of procedures. The practical significance of the findings consists in formulating recommendations aimed at minimizing procedural losses in court under martial law, when the risks of infrastructure degradation, abnormal access regimes, and rapid loss of digital artefacts increase.