Optimized Hyperdimensional Edge AI Evaluation for Efficiency and Reliability under Real Radiation
Hyperdimensional Computing (HDC) is an emerging AI algorithm, touted as an efficient, neuro-inspired, and reliable alternative to neural networks for Edge AI. HDC utilizes hypervectors with several thousand elements; the number of elements in these hypervectors denotes the HDC dimension. This dimension can be optimized to improve the efficiency and reliability of HDC inference against errors such as bit-flips, which can be caused by environmental radiation-induced soft errors. We hypothesize that, by reducing the runtime chip area and execution time utilized by HDC inference through lowering dimensionality, both efficiency and reliability against soft error-induced bit-flips can be simultaneously improved while trading off a negligible amount of accuracy and error threshold. We tested our hypothesis by executing an HDC inference algorithm with two different dimension values, 10000 (10k) and 1024, on a commercially available, low-power, bare-metal ARM platform with a Cortex-M4 processor. We conducted the efficiency analysis by measuring the CPU cycles and energy required to execute the algorithm, and the reliability analysis using real-world atmospheric-like neutron radiation from the ChipIr facility in Oxfordshire, UK. The analyses revealed that, by lowering the HDC dimension from 10k to 1024, the reliability of HDC inference against soft error-induced bit-flips improved 3.5 times and efficiency improved by more than 16 times. This observation contrasts with the prevailing understanding in the community that increasing the HDC dimension always improves robustness or reliability. To the best of our knowledge, our work is the first to study the reliability of HDC inference using real-world radiation.
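The dimension trade-off this abstract describes can be illustrated with a minimal HDC classification sketch (an illustration only, not the paper's implementation): class prototypes are random bipolar hypervectors, inference is a nearest-prototype similarity search, and the dimension `dim` controls both the compute cost and the margin available to absorb bit-flips.

```python
import numpy as np

def hdc_classify(query, prototypes):
    """Return the class whose prototype hypervector is most similar
    (cosine similarity) to the query hypervector."""
    sims = {label: np.dot(query, p) / (np.linalg.norm(query) * np.linalg.norm(p))
            for label, p in prototypes.items()}
    return max(sims, key=sims.get)

def make_prototypes(labels, dim, rng):
    """Random bipolar {-1, +1} prototype hypervectors, one per class."""
    return {lab: rng.choice([-1, 1], size=dim) for lab in labels}

rng = np.random.default_rng(0)
for dim in (10_000, 1_024):          # the two dimensions compared in the paper
    protos = make_prototypes(["a", "b", "c"], dim, rng)
    # a noiseless query equal to prototype "b" is recovered at either dimension
    assert hdc_classify(protos["b"].copy(), protos) == "b"
    # flip 10% of elements (a crude soft-error model) and classify again;
    # the near-orthogonality of random hypervectors absorbs the corruption
    noisy = protos["b"].copy()
    flips = rng.choice(dim, size=dim // 10, replace=False)
    noisy[flips] *= -1
    assert hdc_classify(noisy, protos) == "b"
```

Random hypervectors of either dimension are nearly orthogonal, so even at 1024 dimensions the corrupted query stays far closer to its own prototype than to any other.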
- Conference Article
- 10.1109/vlsi-tsa/vlsi-dat57221.2023.10134289
- Apr 17, 2023
This paper overviews neuromorphic computing with Computation-in-Memory (CiM). CiM efficiently processes the multiply-accumulate (MAC) operations of various neural networks, spiking neural networks, reservoir computing, hyperdimensional computing, and simulated annealing. These neuromorphic systems tolerate memory-cell errors, and low bit precision is acceptable as a form of approximate computing. The paper presents how CiM can realize energy-efficient neuromorphic systems, especially for edge AI.
- Conference Article
1
- 10.23919/date58400.2024.10546832
- Mar 25, 2024
Frontiers in Edge AI with RISC-V: Hyperdimensional Computing vs. Quantized Neural Networks
- Conference Article
- 10.1115/msec2023-105170
- Jun 12, 2023
While hybrid additive manufacturing offers improved material properties, increased design flexibility, and reduced production time, variation in the process (e.g., torque, current, power, and tool speed), together with tool degradation (e.g., spindles, holders, and cutters), alters the surface roughness and dimensional accuracy of fabricated parts. Recently, edge sensors have been coupled with machine learning techniques (e.g., feature-based support vector machines and end-to-end deep neural networks) to connect process parameters with build quality. However, the lack of interpretable learning, poor sample efficiency, and low accuracy and precision limit their capability for reliable analysis. This paper introduces hyperdimensional computing (HDC) to fuse load, current, torque, command speed, control differential, power, and contour deviation, providing robust, sample-efficient, and explainable learning for quality characterization. Experimental results on a real-world hybrid 5-axis CNC Deckel-Maho-Gildemeister (DMG) machine show that HDC achieves a superior 90.5% accuracy in predicting deviation for a 25.4 mm counterbore feature using multichannel data. Compared with support vector machines, logistic regression, multinomial naive Bayes, a multilayer perceptron, and a residual neural network, HDC outperforms them by 55.8%, 28.6%, 53.1%, 11.6%, and 28.0%, respectively. The proposed HDC approach is shown to be effective for data fusion, trains in relatively few iterations, and eliminates the need for costly, lengthy retraining in manufacturing processes such as 3D printing and bio-fabrication.
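The multichannel fusion this abstract describes is commonly done in HDC with a record-based encoding; the sketch below is an assumption for illustration (the paper's exact encoder may differ). Each channel's identity hypervector is bound (element-wise product) with a level hypervector for its quantized reading, and the bound pairs are bundled (summed and thresholded) into one fused hypervector.

```python
import numpy as np

def fuse_channels(readings, channel_hvs, level_hvs):
    """Bind each channel's identity hypervector with the hypervector for its
    quantized reading, then bundle (sum + sign) across channels into a single
    fused hypervector representing the whole multichannel sample."""
    bound = [channel_hvs[ch] * level_hvs[q] for ch, q in readings.items()]
    return np.sign(np.sum(bound, axis=0))

rng = np.random.default_rng(7)
dim, n_levels = 2048, 16
channels = ["load", "current", "torque", "speed", "power"]
channel_hvs = {c: rng.choice([-1, 1], size=dim) for c in channels}   # channel identities
level_hvs = [rng.choice([-1, 1], size=dim) for _ in range(n_levels)] # quantized levels

# hypothetical quantized sensor readings for one sample
readings = {"load": 3, "current": 7, "torque": 1, "speed": 12, "power": 9}
hv = fuse_channels(readings, channel_hvs, level_hvs)
print(hv.shape)
```

Because binding with a channel's identity hypervector is invertible, a classifier can later probe the fused vector for any single channel's contribution, which is one source of the explainability claimed for HDC.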
- Research Article
8
- 10.1109/tbme.2024.3377270
- Aug 1, 2024
- IEEE transactions on bio-medical engineering
Sleep apnea syndrome (SAS) is a common sleep disorder, which has been shown to be an important contributor to major neurocognitive and cardiovascular sequelae. Because current diagnostic strategies rely on bulky medical devices and high examination expenses, a large number of cases go undiagnosed. To enable large-scale screening for SAS, wearable photoplethysmography (PPG) technologies have been used as an early detection tool. However, existing algorithms are energy-intensive and require large amounts of memory, major obstacles to the wider adoption of wearable devices for SAS detection. In this paper, an energy-efficient method of SAS detection based on hyperdimensional computing (HDC) is proposed. Inspired by the phenomenon of chunking in cognitive psychology, a memory mechanism for improving working memory efficiency, we propose a one-dimensional block local binary pattern (1D-BlockLBP) encoding scheme combined with HDC to preserve the dominant dynamical and temporal characteristics of pulse rate signals from wearable PPG devices. Our method achieved 70.17% accuracy in sleep apnea segment detection, comparable to traditional machine learning methods. Additionally, our method achieves up to 67× lower memory footprint, 68× latency reduction, and 93× energy saving on the ARM Cortex-M4 processor. The simplicity of hypervector operations in HDC and the novel 1D-BlockLBP encoding effectively preserve pulse rate signal characteristics with high computational efficiency. This work provides a scalable solution for long-term home-based monitoring of sleep apnea, enhancing the feasibility of consistent patient care.
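As an illustration of the chunking idea, here is a hypothetical sketch of a block-wise 1-D local binary pattern encoder in the spirit of the proposed 1D-BlockLBP (the published encoding may differ in its details): each fixed-length block of the pulse-rate series is summarized by a histogram of LBP codes, which would then be mapped to hypervectors for HDC classification.

```python
import numpy as np

def lbp_1d(signal, radius=4):
    """Classic 1-D local binary pattern: for each sample, compare its
    `radius` neighbours on each side with the centre value and pack the
    comparison bits into an integer code."""
    codes = []
    for i in range(radius, len(signal) - radius):
        neigh = np.concatenate([signal[i - radius:i], signal[i + 1:i + radius + 1]])
        bits = (neigh >= signal[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    return np.array(codes)

def block_lbp_histograms(signal, block=32, radius=4):
    """Split the series into fixed-length blocks ('chunks') and return one
    LBP-code histogram per block -- the block-wise summary that would then
    be encoded into hypervectors."""
    n_bins = 2 ** (2 * radius)
    hists = []
    for start in range(0, len(signal) - block + 1, block):
        codes = lbp_1d(signal[start:start + block], radius)
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        hists.append(hist)
    return np.array(hists)

# synthetic pulse-rate-like series, purely for demonstration
pulse_rate = np.sin(np.linspace(0, 20, 256)) \
    + 0.05 * np.random.default_rng(1).standard_normal(256)
h = block_lbp_histograms(pulse_rate)
print(h.shape)  # (8, 256): 8 blocks, 256 possible 8-bit LBP codes
```

The histogram per block discards absolute amplitude and keeps local shape, which is one way such an encoding can preserve dynamical and temporal characteristics cheaply.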
- Research Article
3
- 10.1002/qre.3498
- Feb 1, 2024
- Quality and Reliability Engineering International
A reliability analysis and evaluation model based on event chains and Bayesian information fusion is proposed for complex space phased-mission systems (SPMSs). The implementation of integrated reliability modeling is detailed, and weak-link identification and reliability evaluation are carried out using the entry, descent, and landing of the Tianwen-1 Mars probe as an example. The results show that the model has good engineering applicability, and the method can be extended to the reliability modeling, analysis, and evaluation of other complex SPMSs.
- Conference Article
2
- 10.1109/dac56929.2023.10247820
- Jul 9, 2023
Lightning Talk: Private and Secure Edge AI with Hyperdimensional Computing
- Research Article
- 10.3390/smartcities8060211
- Dec 16, 2025
- Smart Cities
Smart cities seek to improve urban living by embedding advanced technologies into infrastructures, services, and governance. Edge Artificial Intelligence (Edge AI) has emerged as a critical enabler by moving computation and learning closer to data sources, enabling real-time decision-making, improving privacy, and reducing reliance on centralized cloud infrastructure. This survey provides a comprehensive review of the foundations, challenges, and opportunities of edge AI in smart cities. In particular, we begin with an overview of layer-wise designs for edge AI-enabled smart cities, followed by an introduction to the core components of edge AI systems, including applications, sensing data, models, and infrastructure. Then, we summarize domain-specific applications spanning manufacturing, healthcare, transportation, buildings, and environments, highlighting both the softcore (e.g., AI algorithm design) and the hardcore (e.g., edge device selection) in heterogeneous applications. Next, we analyze the sources of sensing data generation, model design strategies, and hardware infrastructure that underpin edge AI deployment. Building on these, we finally identify several open challenges and provide future research directions in this domain. Our survey outlines a future research roadmap to advance edge AI technologies, thereby supporting the development of adaptive, harmonic, and sustainable smart cities.
- Research Article
2
- 10.1360/sspma2016-00521
- Jul 28, 2017
- SCIENTIA SINICA Physica, Mechanica & Astronomica
With the increasing complexity and size of modern engineering systems, traditional reliability analysis and evaluation techniques, which depend on large amounts of sample data, can no longer meet the demands of complex systems. Motivated by engineering application requirements, this paper focuses on the reliability modeling and analysis of complex systems with uncertainties and failure dependencies. Owing to the diversity of input information, system failure factors, and system redundancies, uncertainty and common cause failure (CCF) have become the most important factors in the reliability analysis and evaluation of complex systems. To handle the epistemic uncertainty caused by a lack of probability-statistical information, fuzzy theory is employed to express the fuzzy information of the system, and the failure probabilities of basic events are described by interval-valued fuzzy numbers. Accounting for the influence of CCF on system reliability and the widespread presence of multi-state systems (MSSs) in engineering practice, the CCF is quantified by the β-factor parametric model and integrated into a Bayesian network (BN) model through a newly defined common cause node. Finally, a comprehensive method for reliability modeling and assessment of an MSS with CCFs, based on an interval-valued fuzzy BN, is proposed by exploiting the graphical representation and uncertainty-reasoning capabilities of BNs. The method is applied to the transmission system of the two-axis positioning mechanism of a satellite antenna to demonstrate its effectiveness and its capability to calculate system reliability directly from the multi-state probabilities of components. The results show that the proposed method advances the theory of complex-system reliability analysis and is suitable for engineering application.
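The β-factor model mentioned in this abstract splits each component's failure rate into an independent part and a shared common-cause part that defeats all redundant channels at once. A minimal crisp-valued sketch follows (the paper itself works with interval-valued fuzzy probabilities and a Bayesian network, which are not modeled here):

```python
import math

def beta_factor_reliability(lam, beta, t, n=2):
    """Reliability of a 1-out-of-n redundant system under the beta-factor
    common-cause model: a fraction `beta` of each component's failure rate
    `lam` is a shared (common-cause) rate acting on all channels together."""
    lam_ind = (1 - beta) * lam        # independent part of the failure rate
    lam_ccf = beta * lam              # common-cause part, shared by all channels
    r_ind = math.exp(-lam_ind * t)    # per-channel survival, independent failures
    r_red = 1 - (1 - r_ind) ** n      # 1-out-of-n redundancy over the independent part
    return r_red * math.exp(-lam_ccf * t)

# With beta = 0 the duplicated channel gives the full redundancy benefit;
# as beta grows, common-cause failures erode that benefit.
print(round(beta_factor_reliability(1e-3, 0.0, 1000), 4))  # → 0.6004
print(round(beta_factor_reliability(1e-3, 0.1, 1000), 4))  # → 0.5862
```

In a BN formulation such as the paper's, the common-cause rate would instead appear as a dedicated common-cause node that is a parent of every redundant channel.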
- Research Article
2
- 10.1155/2022/5420772
- Jun 11, 2022
- Journal of Sensors
Compared with earlier accounting information systems (hereinafter AIS), accounting cloud services introduce new features: a dynamic, changing environment; cloud storage located away from the enterprise; service modules purchased on demand; seamless dynamic configuration; and a restructured accounting information processing workflow. These new features increase the information risk faced by enterprises, so reasonable and effective measures are needed to let enterprises judge intuitively whether an AIS is trustworthy in the accounting cloud service environment. Drawing on existing research in reliability evaluation, this paper analyzes the current state of accounting cloud services and their characteristics relative to earlier AIS, and organizes the measurement of accounting cloud service reliability into normative inspection, index calculation, and reliability calculation. The paper analyzes the reliability requirements and reliability attributes of accounting cloud services and constructs a reliability evaluation grade model combined with fuzzy comprehensive evaluation to guide user selection and the quality management of cloud accounting suppliers. Considering the complexity and dynamics of AIS reliability evaluation in the accounting cloud service environment, where reliability is also affected by complex call relationships between modules, a reliability analysis and evaluation method for accounting cloud services based on complex network theory is proposed.
- Conference Article
1
- 10.1109/qrs-c55045.2021.00026
- Dec 1, 2021
As a safety-critical system, the reliable operation of the smart grid is crucial to economic prosperity and social stability, and reliability analysis and evaluation is one of the most effective means of achieving this goal. As smart grids continue to grow in size and complexity, complex network analysis is useful for understanding salient properties of such systems by modeling the smart grid as a network. In light of this, many studies analyze the reliability of the smart grid at multiple scales: reliability, vulnerability, resilience, stability, robustness, survivability, etc. However, these concepts are both distinct from and similar to one another, which can be confusing for beginners. This paper holds that reliability, vulnerability, and resilience are three important concepts that representatively describe the reliability level of a smart grid network during and after perturbation, and it aims to provide a focused overview of complex-network-based reliability, vulnerability, and resilience analysis for the smart grid. We hope this survey will help bridge academic researchers and industry engineers in identifying appropriate topics for in-depth future cooperation.
- Research Article
3
- 10.1016/j.sysarc.2024.103216
- Jun 28, 2024
- Journal of Systems Architecture
Integrated analysis of reliability, power, and performance for IoT devices and servers
- Conference Article
- 10.1115/imece1995-1442
- Nov 12, 1995
The IDEF methodology has been extensively used for modeling processes. Qualitative and quantitative reliability analysis and risk assessment of IDEF models is of interest to industry for several reasons: it identifies critical activities in a process, improves process performance, and decreases the downtime and operating cost of the process. Formal tools and techniques are required to evaluate the reliability and risk associated with an IDEF3 model. This paper extends system reliability evaluation techniques, i.e., the system reduction approach and the minimal path and cut sets method, to the reliability evaluation of IDEF3 models. The representation of IDEF3 models as reliability graphs, the generation of minimal path and cut sets of IDEF3 models with a path tree algorithm, and the reliability analysis of IDEF3 models are the issues discussed in this paper. An algorithm for computing the reliability of an IDEF3 model from a path set–activity incidence matrix is also presented. In addition, fault tree analysis and minimal cut and path set generation algorithms are applied to the reliability evaluation and risk assessment of the parent activities in an IDEF3 model. Structural and reliability importance measures for parent activities in an IDEF3 model, as well as for the elementary activities in a decomposed model, are presented.
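Once an IDEF3 model's minimal path sets are known, system reliability follows from inclusion-exclusion over those sets, assuming independent activities. A small sketch with hypothetical path sets (not taken from the paper):

```python
from itertools import combinations

def reliability_from_path_sets(path_sets, p):
    """System reliability from minimal path sets by inclusion-exclusion:
    the system works iff every activity in at least one path set works.
    `p` maps each activity to its success probability; activities are
    assumed independent."""
    total = 0.0
    for k in range(1, len(path_sets) + 1):
        for combo in combinations(path_sets, k):
            union = set().union(*combo)      # activities needed by this combination
            prob = 1.0
            for a in union:
                prob *= p[a]
            total += (-1) ** (k + 1) * prob  # alternate signs: inclusion-exclusion
    return total

# Hypothetical series-parallel process: activity A feeds parallel branches B, C.
paths = [{"A", "B"}, {"A", "C"}]
p = {"A": 0.99, "B": 0.9, "C": 0.9}
print(round(reliability_from_path_sets(paths, p), 6))  # 0.99 * (1 - 0.1 * 0.1) = 0.9801
```

The same routine applied to minimal cut sets (with failure probabilities) gives the dual unreliability bound; for large models the exponential number of terms is why reduction approaches are also used.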
- Research Article
5
- 10.1016/j.gloei.2022.10.004
- Oct 1, 2022
- Global Energy Interconnection
Reliability and sensitivity analysis of loop-designed security and stability control system in interconnected power systems
- Book Chapter
8
- 10.1007/978-3-319-51343-0_20
- Jan 1, 2017
Ever-increasing energy demand, the unsustainable nature of fossil fuels, and environmental factors have led scientists and researchers to explore electric power generation at the consumer terminal (distributed generation). This has resulted in the emergence of the microgrid, a combination of renewable and non-renewable energy resources. Microgrids are becoming popular because of their low cost and high energy efficiency, but their reliability analysis and improvement remain a major concern owing to the fluctuating nature of renewable energy sources. This chapter presents methods that can be used for the reliability and adequacy analysis of a hybrid microgrid. Monte Carlo simulation, fault tree analysis, and Bayesian networks are the most popular such methods, with the capacity to deal with the uncertainties involved in a microgrid. The effect of extreme weather conditions on different reliability indices is also analyzed, and techniques for improving microgrid reliability are discussed.
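A toy Monte Carlo adequacy sketch in the spirit of this chapter (all capacities, outage rates, and distributions below are illustrative assumptions, not data from the chapter): it estimates loss-of-load probability for a diesel-plus-solar microgrid under fluctuating generation and demand.

```python
import random

def lolp_monte_carlo(n_trials=100_000, seed=42):
    """Monte Carlo estimate of loss-of-load probability (LOLP) for a toy
    hybrid microgrid: a dispatchable unit with a forced-outage rate plus a
    solar unit whose output fluctuates, serving a varying load."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(n_trials):
        diesel = 50.0 if rng.random() > 0.05 else 0.0  # 50 kW unit, 5% forced-outage rate
        solar = 30.0 * rng.random()                    # fluctuating renewable output, 0-30 kW
        load = rng.uniform(40.0, 70.0)                 # varying demand
        if diesel + solar < load:
            shortfalls += 1
    return shortfalls / n_trials

print(f"Estimated LOLP: {lolp_monte_carlo():.3f}")
```

Sampling weather-dependent distributions for the solar term is how the extreme-weather sensitivity mentioned in the chapter would enter a study like this; fault tree or Bayesian network methods would instead propagate the component probabilities analytically.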
- Single Book
41
- 10.1016/c2009-0-10169-1
- Jan 1, 1992
Reliability Analysis and Prediction - A Methodology Oriented Treatment