Articles published on Deep Inference
- Research Article
- 10.1016/j.asoc.2025.113381
- Aug 1, 2025
- Applied Soft Computing
- Yulong Wang + 5 more
Efficient and privacy-preserving deep inference towards cloud–edge collaborative
- Research Article
- 10.1037/rev0000543
- May 5, 2025
- Psychological review
- Paul A Soden + 3 more
Autistic meltdowns are fits of intense frustration and often physical violence elicited by sensory and cognitive stressors. Despite the high prevalence of meltdowns among autistic individuals, the neural mechanisms that underlie this response are not yet well understood. This has thus far hampered progress toward a dedicated therapeutic intervention, beyond traditional medications, that limits their frequency and severity. Here, we aim to initiate an interdisciplinary dialogue on the etiology of sensory meltdowns. In doing so, we frame meltdowns as a consequence of underlying chronic hypervigilance and acute hyperreactivity to objectively benign stressors driven by differences in the insular cortex, a multimodal integration hub that adapts autonomic state and behavior to meet environmental demands. We first discuss meltdowns through the lens of neurophysiology and argue that intrainsular hypoconnectivity engenders vagal withdrawal and sympathetic hyperarousal in autism, driving chronic hypervigilance and reducing the threshold of stressors those with autism can tolerate before experiencing a meltdown. Next, we turn to neuropsychology and present evidence that meltdowns reflect a difference in how contextual evidence, particularly social cues, is integrated when acutely assessing ambiguous signs of danger in the environment, a process termed neuroception. Finally, we build on contemporary predictive coding accounts of autism to argue that meltdowns may be ultimately driven by differences in sensory attenuation and coherent deep inference within the interoceptive hierarchy, possibly linked to oxytocin deficiency during infancy. Throughout, we synthesize each perspective to construct a multidisciplinary, insula-based model of meltdowns. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
- Research Article
- 10.1088/1475-7516/2025/05/053
- May 1, 2025
- Journal of Cosmology and Astroparticle Physics
- Jason Poh + 5 more
The large number of strong lenses discoverable in future astronomical surveys will likely enhance the value of strong gravitational lensing as a cosmic probe of dark energy and dark matter. However, leveraging the increased statistical power of such large samples will require further development of automated lens modeling techniques. We show that deep learning and simulation-based inference (SBI) methods produce informative and reliable estimates of parameter posteriors for strong lensing systems in ground-based surveys. We present the examination and comparison of two approaches to lens parameter estimation for strong galaxy-galaxy lenses: Neural Posterior Estimation (NPE) and Bayesian Neural Networks (BNNs). We perform inference on 1-, 5-, and 12-parameter lens models for ground-based imaging data that mimics the Dark Energy Survey (DES). We find that NPE outperforms BNNs, producing posterior distributions that are more accurate, precise, and well-calibrated for most parameters. For the 12-parameter NPE model, the calibration is consistently within 10% of optimal calibration for all parameters, while the BNN is rarely within 20% of optimal calibration for any of the parameters. Similarly, residuals for most of the parameters are smaller (by up to an order of magnitude) with the NPE model than with the BNN model. This work takes important steps in the systematic comparison of methods for different levels of model complexity.
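To give a flavour of the simulation-based inference idea behind methods such as NPE (this is not the authors' pipeline, which trains a neural density estimator on simulated lens images), here is a minimal rejection-ABC sketch with a hypothetical one-parameter simulator:

```python
import random

def simulator(theta, n=50, seed=None):
    # Hypothetical stand-in for a lens-image simulator: n noisy
    # observations centred on the single parameter theta.
    rng = random.Random(seed)
    return [theta + rng.gauss(0, 1) for _ in range(n)]

def summary(x):
    return sum(x) / len(x)  # summary statistic: sample mean

def rejection_abc(observed, prior_draw, n_sims=5000, eps=0.1):
    """Keep prior draws whose simulated summary lands within eps of the
    observed summary; the kept draws approximate the posterior."""
    s_obs = summary(observed)
    accepted = []
    for i in range(n_sims):
        theta = prior_draw()
        if abs(summary(simulator(theta, seed=i)) - s_obs) < eps:
            accepted.append(theta)
    return accepted

prior_rng = random.Random(0)
true_theta = 1.5
observed = simulator(true_theta, seed=123)
post = rejection_abc(observed, lambda: prior_rng.uniform(-5, 5))
print(len(post), sum(post) / len(post))  # accepted draws concentrate near 1.5
```

Neural methods like NPE replace the expensive rejection step with a learned conditional density over parameters, which is what makes them practical for the high-dimensional lens models compared here.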
- Research Article
- 10.1016/j.ymssp.2025.112445
- Apr 1, 2025
- Mechanical Systems and Signal Processing
- S Karthikeyani + 2 more
A framework for detecting high-performance cardiac arrhythmias using deep inference engine on FPGA and higher-order spectral distribution
- Research Article
- 10.1029/2024gl112118
- Jan 17, 2025
- Geophysical Research Letters
- Qi Shao + 6 more
Abstract An interpretable deep inference forecasting model is designed to improve the forecasting capability of sea surface variables. By incorporating the air-sea coupling mechanism as a dynamic constraint, the interpretability and forecasting performance of the model are improved. More specifically, our findings underscore the critical role of air-sea interactions in forecasting sea surface variables, especially sea surface temperature (SST) variations induced by tropical cyclones (TCs). Additionally, the Liang-Kleeman information flow (IF), a causal inference method, is introduced to optimize the selection of predictors. Using satellite remote sensing data, our study demonstrates the model's capability to realize sea surface multivariate forecasts in the South China Sea (SCS) within 10 days. More importantly, the experimental results prove the applicability of the model in both normal and extreme weather conditions, highlighting its effectiveness in enhancing the forecasting of sea surface variables.
- Research Article
- 10.46298/entics.14870
- Dec 11, 2024
- Electronic Notes in Theoretical Informatics and Computer Science
- Robert Atkey + 1 more
Multiplicative-Additive System Virtual (MAV) is a logic that extends Multiplicative-Additive Linear Logic with a self-dual non-commutative operator expressing the concept of "before" or "sequencing". MAV is also an extension of the logic Basic System Virtual (BV) with additives. Formulas in BV have an appealing reading as processes with parallel and sequential composition. MAV adds internal and external choice operators. BV and MAV are also closely related to Concurrent Kleene Algebras. Proof systems for MAV and BV are Deep Inference systems, which allow inference rules to be applied anywhere inside a structure. As with any proof system, a key question is whether proofs in MAV can be reduced to a normal form, removing detours and the introduction of structures not present in the original goal. In Sequent Calculus systems, this property is referred to as Cut Elimination. Deep Inference systems have an analogous Cut rule and other rules that are not present in normalised proofs. Cut Elimination for Deep Inference systems has the same metatheoretic benefits as for Sequent Calculus systems, including consistency and decidability. Proofs of Cut Elimination for BV, MAV, and other Deep Inference systems present in the literature have relied on intricate syntactic reasoning and complex termination measures. We present a concise semantic proof that all MAV proofs can be reduced to a normal form avoiding the Cut rule and other "non-analytic" rules. We also develop soundness and completeness proofs of MAV (and BV) with respect to a class of models. We have mechanised all our proofs in the Agda proof assistant, which provides both assurance of their correctness as well as yielding an executable normalisation procedure. Our technique extends to include exponentials and the additive units.
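For readers unfamiliar with the formalism, the hallmark of deep inference is that rules apply inside an arbitrary context S{ }. A standard example, shown here schematically rather than in the paper's exact syntax, is the switch rule found in deep inference systems such as SKS:

```latex
% The switch rule, applicable deep inside any formula context S{ }:
\[
  \mathsf{s}\;
  \frac{S\{(A \lor B) \land C\}}
       {S\{(A \land C) \lor B\}}
\]
% Because S{ } ranges over all contexts, the rule may rewrite a
% subformula arbitrarily far from the root of the structure, which
% is precisely what sequent-calculus rules cannot do.
```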
- Research Article
- 10.1093/logcom/exae064
- Oct 10, 2024
- Journal of Logic and Computation
- Francesca Poggiolesi
Abstract To explain phenomena in the world is a central human activity and one of the main goals of rational inquiry. There are several types of explanation: one can explain by drawing an analogy, as one can explain by dwelling on the causes (see, e.g., [Woodward (2004, Making Things Happen: A Theory of Causal Explanation. Oxford University Press, Oxford)]). Amongst these different kinds of explanation, in the last decade, philosophers have become receptive to those explanations that explain by providing the reasons (or the grounds) why a statement is true; these explanations are often called conceptual explanations (see, e.g., [Betti (2010, Explanation in metaphysics and Bolzano's theory of ground and consequence. Logique et analyse, 211:281-316)]). The main aim of the paper is to propose a logical account of conceptual explanations. We will do so by using the resources of proof theory, in particular sequent rules analogous to deep inferences (see, e.g., [Brünnler (2004, Deep Inference and Symmetry in Classical Proofs. Logos Verlag)]). The results we provide not only shed light on conceptual explanations themselves, but also on the role that logic and logical tools might play in the burgeoning field of inquiry concerning explanations. Indeed, we conclude the paper by underlining interesting links between the present research and some other existing works on explanations and logic that have arisen in recent years, e.g. [Arieli et al. (2022, Explainable logic-based argumentation. Computational Models of Argument, 353:32-43); Darwiche and Hirth (2023, On the (complete) reasons behind decisions. Journal of Logic, Language and Information, 32:63-88); Piazza, Pulcini, and Sabatini (2023, Abduction as deductive saturation: a proof-theoretic inquiry. Journal of Philosophical Logic, 52:1575-1602)]. "For here it is for the empirical scientist to know the fact and for the mathematical to know the reason why" (our emphasis) [Aristotle (1993, Posterior Analytics. Oxford University Press, Oxford)].
- Research Article
- 10.1109/tcc.2024.3399616
- Jul 1, 2024
- IEEE Transactions on Cloud Computing
- Xueyu Hou + 3 more
BPS: Batching, Pipelining, Surgeon of Continuous Deep Inference on Collaborative Edge Intelligence
- Research Article
- 10.1088/1742-6596/2632/1/012019
- Nov 1, 2023
- Journal of Physics: Conference Series
- Xiao Hu + 4 more
In recent years, with the development of sensors, communication networks, and deep learning, drones have been widely used in the fields of object detection, tracking, and positioning. However, task execution is often inefficient, and some complex algorithms still need to rely on large servers, which is intolerable in rescue and traffic-scheduling tasks. Designing fast algorithms that can run on the airborne computer can effectively solve this problem. In this paper, an object detection and location system for drones is proposed. We combine the improved object detection algorithm ST-YOLO, based on YOLOX and Swin Transformer, with a visual positioning algorithm and deploy it on the airborne end using TensorRT to realize the detection and location of objects during the flight of the drone. Field experiments show that the established system and algorithm are effective.
- Research Article
- 10.1145/3608475
- Sep 26, 2023
- ACM Transactions on Embedded Computing Systems
- Luca Caronti + 4 more
Backing up the intermediate results of hardware-accelerated deep inference is crucial to ensure the progress of execution on batteryless computing platforms. However, hardware accelerators in low-power AI platforms only support the one-shot atomic execution of one neural network inference, without any backups. This article introduces a new toolchain for the MAX78000, a brand-new microcontroller with a hardware-based convolutional neural network (CNN) accelerator. Our toolchain converts any MAX78000-compatible neural network into an intermittently executable form. The toolchain enables finer checkpoint granularity on the MAX78000 CNN accelerator, allowing backups of any intermediate neural network layer output. Based on the layer-by-layer CNN execution, we propose a new backup technique that performs only necessary (urgent) checkpoints. The batteryless system switches to ultra-low-power mode while charging and saves intermediate results only when the input power falls below the energy consumption of the ultra-low-power mode. By avoiding unnecessary memory transfers, the proposed solution increases inference throughput by 1.9× in simulation and by 1.2× in a real-world setup, compared to the coarse-grained baseline execution.
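The "urgent checkpoint" policy described above amounts to a small decision rule. The sketch below is purely illustrative; the threshold value and function names are hypothetical and not taken from the MAX78000 toolchain:

```python
# Illustrative energy model for an intermittently powered accelerator.
ULP_POWER_MW = 0.5  # assumed consumption in ultra-low-power wait mode

def checkpoint_is_urgent(input_power_mw):
    """Back up a layer's output only when harvested power no longer
    covers even ultra-low-power waiting, so progress would otherwise
    be lost when the capacitor drains."""
    return input_power_mw < ULP_POWER_MW

def after_layer(layer, input_power_mw):
    # Called once per CNN layer in the layer-by-layer execution model.
    if checkpoint_is_urgent(input_power_mw):
        return f"layer {layer}: urgent checkpoint"
    return f"layer {layer}: skip backup, keep charging"

print(after_layer(3, 1.2))  # harvesting well: no backup needed
print(after_layer(4, 0.1))  # power dropping: back up now
```

Skipping the backup whenever harvested power still covers waiting is what avoids the unnecessary memory transfers the abstract credits for the throughput gain.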
- Research Article
- 10.24840/2183-6493_009-003_001636
- Apr 28, 2023
- U.Porto Journal of Engineering
- Faraz Bagwan + 1 more
Combating the covid19 scourge is a prime concern for the human race today. Rapid diagnosis and isolation of virus-exposed persons is critical to limiting illness transmission. Due to the prevalence of public health crises, reaction-based blood tests are the customary approach for identifying covid19. As a result, scientists are testing promising screening methods like deep layered machine learning on chest radiographs. Despite their usefulness, these approaches have large computational costs, rendering them unworkable in practice. This study's main goal is to establish an accurate yet efficient method for covid19 predicting using chest radiography pictures. We utilize and enhance the graph-based family of neural networks to achieve the stated goal. The IsoCore algorithm is trained on a collection of X-ray images separated into four categories: healthy, Covid19, viral pneumonia, and bacterial pneumonia. The IsoCore, which has 5 to 10 times fewer parameters than the other tested designs, attained an overall accuracy of 99.79%. We believe the acquired results are the most ideal in the deep inference domain at this time. This proposed model might be employed by doctors via phones.
- Research Article
- 10.1016/j.ipm.2023.103376
- Apr 10, 2023
- Information Processing & Management
- Xin Min + 4 more
Multi-channel hypergraph topic neural network for clinical treatment pattern mining
- Research Article
- 10.1109/tvt.2022.3202344
- Jan 1, 2023
- IEEE Transactions on Vehicular Technology
- Xiuqi Chen + 4 more
The stability of brake control is an important guarantee of the safety of heavy-duty vehicles (HDVs) at high speeds. However, electro-hydraulic braking actuation systems often exhibit delays on the order of seconds, which makes braking performance forecasting and control difficult. To address the torque-tracking control problem with time delay, a deep inference and control method is proposed. First, the theoretical delay time under different rotating speeds is identified with a data-driven model. Then, a fast end-to-end prediction model is established to estimate the torque performance of the next step with delay information. A deep Q-network (DQN) learning approach is proposed to learn from the experimental data by exploring and seeking the optimal control strategy in the time-delay environment. A comparative simulation of the proposed DQN-based controller with and without considering time delay, and of a rule-based method with and without considering time delay, is implemented, and an online processor-in-the-loop (PIL) test with the edge computing device NVIDIA Jetson Xavier NX is performed under robustness conditions. The simulation and PIL test results demonstrate that the proposed control framework greatly improves torque tracking while remaining time-efficient.
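The DQN controller itself is beyond the scope of an abstract, but the Bellman update at its core can be illustrated with its tabular ancestor. The states and actions below are hypothetical discretised torque-error and command levels, not the paper's state space:

```python
import random

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def q_update(Q, s, a, r, s_next, actions):
    """One Bellman backup: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

def epsilon_greedy(Q, s, actions, rng):
    # Explore with probability EPS, otherwise act greedily on Q.
    if rng.random() < EPS:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

Q, rng, actions = {}, random.Random(0), [0, 1, 2]
# Penalise action 1 in a high-tracking-error state; Q starts at zero,
# so one backup moves the value to 0 + 0.1 * (-1.0 + 0 - 0) = -0.1.
q_update(Q, "err_high", 1, -1.0, "err_low", actions)
print(Q[("err_high", 1)])  # -0.1
```

A DQN replaces the table with a neural network, which is what lets the paper's controller cope with a continuous, delayed torque-tracking state.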
- Research Article
- 10.46298/lmcs-18(4:1)2022
- Oct 21, 2022
- Logical Methods in Computer Science
- Matteo Acclavio + 2 more
In this paper we present a proof system that operates on graphs instead of formulas. Starting from the well-known relationship between formulas and cographs, we drop the cograph conditions and look at arbitrary (undirected) graphs. This means that we lose the tree structure of the formulas corresponding to the cographs, and we can no longer use standard proof-theoretical methods that depend on that tree structure. In order to overcome this difficulty, we use a modular decomposition of graphs and some techniques from deep inference where inference rules do not rely on the main connective of a formula. For our proof system we show the admissibility of cut and a generalisation of the splitting property. Finally, we show that our system is a conservative extension of multiplicative linear logic with mix, and we argue that our graphs form a notion of generalised connective.
- Research Article
- 10.1145/3545116
- Oct 20, 2022
- ACM Transactions on Computational Logic
- Chris Barrett + 1 more
We design a proof system for propositional classical logic that integrates two languages for Boolean functions: standard conjunction-disjunction-negation and binary decision trees. We give two reasons to do so. The first is proof-theoretical naturalness: The system consists of all and only the inference rules generated by the single, simple, linear scheme of the recently introduced subatomic logic. Thanks to this regularity, cuts are eliminated via a natural construction. The second reason is that the system generates efficient proofs. Indeed, we show that a certain class of tautologies due to Statman, which cannot have better than exponential cut-free proofs in the sequent calculus, have polynomial cut-free proofs in our system. We achieve this by using the same construction that we use for cut elimination. In summary, by expanding the language of propositional logic, we make its proof theory more regular and generate more proofs, some of which are very efficient. That design is made possible by considering atoms as superpositions of their truth values, which are connected by self-dual, non-commutative connectives. A proof can then be projected via each atom into two proofs, one for each truth value, without a need for cuts. Those projections are semantically natural and are at the heart of the constructions in this article. To accommodate self-dual non-commutativity, we compose proofs in deep inference.
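The projection idea, reading an atom as a superposition of its two truth values, can be mimicked with a toy cofactor computation. This illustrates the semantic intuition only, not the paper's subatomic proof system:

```python
def project(f, atom, value):
    """Substitute a truth value for atom, then simplify bottom-up.
    Formulas are strings (atoms) or tuples ('not'/'and'/'or', ...)."""
    if isinstance(f, str):
        return value if f == atom else f
    op, *args = f
    args = [project(a, atom, value) for a in args]
    if op == "not":
        a = args[0]
        return (not a) if isinstance(a, bool) else ("not", a)
    if op == "and":
        if any(a is False for a in args):
            return False
        args = [a for a in args if a is not True]
        if not args:
            return True
        return args[0] if len(args) == 1 else ("and", *args)
    if op == "or":
        if any(a is True for a in args):
            return True
        args = [a for a in args if a is not False]
        if not args:
            return False
        return args[0] if len(args) == 1 else ("or", *args)

# (a and b) or ((not a) and c): projecting through atom a yields the
# two classical cofactors, one per truth value -- a binary decision
# tree on a with leaves b and c.
f = ("or", ("and", "a", "b"), ("and", ("not", "a"), "c"))
print(project(f, "a", True))   # b
print(project(f, "a", False))  # c
```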
- Research Article
- 10.1145/3506732
- Sep 30, 2022
- ACM Transactions on Embedded Computing Systems
- Chih-Kai Kang + 4 more
Energy harvesting creates an emerging intermittent computing paradigm but poses new challenges for sophisticated applications such as intermittent deep neural network (DNN) inference. Although model compression has adapted DNNs to resource-constrained devices, under intermittent power, compressed models will still experience multiple power failures during a single inference. Footprint-based approaches enable hardware-accelerated intermittent DNN inference by tracking footprints, independent of model computations, to indicate accelerator progress across power cycles. However, we observe that the extra overhead required to preserve progress indicators can severely offset the computation progress accumulated by intermittent DNN inference. This work proposes the concept of model augmentation to adapt DNNs to intermittent devices. Our middleware stack, JAPARI, appends extra neural network components into a given DNN, to enable the accelerator to intrinsically integrate progress indicators into the inference process, without affecting model accuracy. Their specific positions allow progress indicator preservation to be piggybacked onto output feature preservation to amortize the extra overhead, and their assigned values ensure uniquely distinguishable progress indicators for correct inference recovery upon power resumption. Evaluations on a Texas Instruments device under various DNN models, capacitor sizes, and progress preservation granularities show that JAPARI can speed up intermittent DNN inference by 3× over the state of the art, for common convolutional neural architectures that require heavy acceleration.
- Research Article
- 10.1186/s12880-022-00854-x
- Jul 14, 2022
- BMC Medical Imaging
- Jiawei Fan + 6 more
Background: Current medical image translation is implemented in the image domain. Considering that medical image acquisition is essentially a temporally continuous process, we attempt to develop a novel image translation framework, via deep learning trained in the video domain, for generating synthesized computed tomography (CT) images from cone-beam computed tomography (CBCT) images. Methods: For a proof-of-concept demonstration, CBCT and CT images from 100 patients were collected to demonstrate the feasibility and reliability of the proposed framework. The CBCT and CT images were further registered as paired samples and used as the input data for supervised model training. A vid2vid framework based on the conditional GAN network, with carefully designed generators, discriminators, and a new spatio-temporal learning objective, was applied to realize CBCT-CT image translation in the video domain. Four evaluation metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and structural similarity (SSIM), were calculated on all the real and synthetic CT images from 10 new testing patients to illustrate the model performance. Results: The average values of the four evaluation metrics, MAE, PSNR, NCC, and SSIM, are 23.27 ± 5.53, 32.67 ± 1.98, 0.99 ± 0.0059, and 0.97 ± 0.028, respectively. Most of the pixel-wise Hounsfield unit differences between real and synthetic CT images are within 50. The synthetic CT images show good agreement with the real CT images, and image quality is improved, with lower noise and fewer artifacts, compared with the CBCT images. Conclusions: We developed a deep-learning-based approach to the medical image translation problem in the video domain. Although the feasibility and reliability of the proposed framework were demonstrated on CBCT-CT image translation, it can be easily extended to other types of medical images. The current results illustrate that this is a very promising method that may pave a new path for medical image translation research.
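The first three evaluation metrics are simple to state; here is a minimal pure-Python version on flattened pixel lists (illustrative toy data, not the study's images; SSIM is omitted since it requires windowed statistics):

```python
import math

def mae(a, b):
    # Mean absolute error between two equal-length pixel lists.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, data_range):
    # Peak signal-to-noise ratio in dB, relative to the data range.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(data_range**2 / mse)

def ncc(a, b):
    # Normalized cross-correlation (Pearson correlation of pixels).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

real = [0.0, 10.0, 20.0, 30.0]   # toy "real CT" pixels
synth = [1.0, 9.0, 21.0, 29.0]   # toy "synthetic CT" pixels
print(mae(real, synth))  # 1.0
print(psnr(real, synth, 30))
print(ncc(real, synth))
```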
- Research Article
- 10.1109/tnse.2022.3165472
- Jul 1, 2022
- IEEE Transactions on Network Science and Engineering
- Emna Baccour + 4 more
Although Deep Neural Networks (DNNs) have become the backbone technology of several ubiquitous applications, their deployment in resource-constrained machines, e.g., Internet of Things (IoT) devices, is still challenging. To satisfy the resource requirements of such a paradigm, collaborative deep inference with IoT synergy was introduced. However, the distribution of DNN networks suffers from severe data leakage. Various threats have been presented, including black-box attacks, where malicious participants can recover arbitrary inputs fed into their devices. Although many countermeasures were designed to achieve privacy-preserving DNNs, most of them result in additional computation and lower accuracy. In this paper, we present an approach that targets the security of collaborative deep inference by re-thinking the distribution strategy, without sacrificing model performance. In particular, we examine the DNN partitions that make the model susceptible to black-box threats, and we derive the amount of data that should be allocated per device to hide properties of the original input. We formulate this methodology as an optimization in which we establish a trade-off between the latency of co-inference and the privacy level of the data. Next, to relax the optimal solution, we shape our approach as a Reinforcement Learning (RL) design that supports heterogeneous devices as well as multiple DNNs/datasets.
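The latency-privacy trade-off can be pictured with a toy brute-force selector. The numbers and field names below are made up for illustration; the paper solves the real problem as a formal optimization and an RL design:

```python
# Hypothetical candidate partitions of a DNN: latency of co-inference
# and a privacy score for splitting after each layer (illustrative).
partitions = [
    {"split_after": 1, "latency_ms": 40, "privacy": 0.2},
    {"split_after": 3, "latency_ms": 55, "privacy": 0.6},
    {"split_after": 5, "latency_ms": 80, "privacy": 0.9},
]

def best_partition(parts, min_privacy):
    """Lowest-latency split whose privacy level meets the threshold:
    a brute-force picture of the trade-off, not the paper's method."""
    feasible = [p for p in parts if p["privacy"] >= min_privacy]
    return min(feasible, key=lambda p: p["latency_ms"]) if feasible else None

print(best_partition(partitions, 0.5)["split_after"])  # 3
```

Raising the privacy threshold forces deeper (slower) splits, which is exactly the tension the abstract's optimization formalizes.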
- Research Article
- 10.1080/17489725.2021.2017495
- Mar 31, 2022
- Journal of Location Based Services
- Jerome Dreyer + 4 more
ABSTRACT In many countries, informed consent is required before a service provider can collect personal data from a user. For location-based services (LBS), this applies in particular to personal location information, which can enable deep inferences about a person. In this paper, we present a systematic analysis of how informed consent for the collection of personal location information is obtained in 40 popular LBS on each of the two largest app stores. Two independent raters assessed the content, structure and design of the dialogues shown by apps to obtain consent from users. Based on their assessment, we identified common approaches used across and within different app categories and platforms, including the frequent use of ‘dark patterns’. We highlight key issues arising from these common designs, discuss specific gaps in the procedure of obtaining informed consent and propose improvements to that procedure. In addition, we consider current practice in the context of enabling digital sovereignty with respect to personal location information. Our findings can shape the design and evaluation of informed consent procedures for future LBS in research and practice.
- Research Article
- 10.1371/journal.pcbi.1009890.r004
- Mar 11, 2022
- PLoS Computational Biology
- Amédée Roy + 3 more
The at-sea behaviour of seabirds has received significant attention in ecology over the last decades, as it is a key process in the ecology and fate of these populations. It is also, through the position of top predator that these species often occupy, a relevant and integrative indicator of the dynamics of the marine ecosystems they rely on. Seabird trajectories are recorded through the deployment of GPS devices, and a variety of statistical approaches have been tested to infer probable behaviours from these location data. Recently, deep learning tools have shown promising results for the segmentation and classification of animal behaviour from trajectory data. Yet these approaches have not been widely used, and investigation is still needed to identify optimal network architectures and to demonstrate their generalization properties. Using a database of about 300 foraging trajectories derived from GPS devices deployed simultaneously with pressure sensors for the identification of dives, this work benchmarks deep neural network architectures trained in a supervised manner for the prediction of dives from trajectory data. It first confirms that deep learning allows better dive prediction than usual methods such as Hidden Markov Models. It also demonstrates the generalization properties of the trained networks for inferring dive distributions for seabirds from other colonies and ecosystems. In particular, convolutional networks trained on Peruvian boobies from a specific colony show great ability to predict dives of boobies from other colonies and from distinct ecosystems. We further investigate across-species generalization using a transfer learning strategy known as 'fine-tuning': starting from a convolutional network pre-trained on Guanay cormorant data reduced by a factor of two the size of the dataset needed to accurately predict dives in a tropical booby from Brazil. We believe that the networks trained in this study will provide a relevant starting point for future fine-tuning work on seabird trajectory segmentation.