Performance Enhancement of DC Motor Drive for Electric Vehicle Application by Using Deep Neural Network


Similar Papers
  • Research Article
  • Cited by 3
  • 10.3150/22-bej1553
Deep stable neural networks: Large-width asymptotics and convergence rates
  • Aug 1, 2023
  • Bernoulli
  • Stefano Favaro + 2 more

In modern deep learning, there is a recent and growing literature on the interplay between large-width asymptotics for deep Gaussian neural networks (NNs), i.e. deep NNs with Gaussian-distributed weights, and classes of Gaussian stochastic processes (SPs). Such an interplay has proved to be critical in several contexts of practical interest, e.g. Bayesian inference under Gaussian SP priors, kernel regression for infinite-wide deep NNs trained via gradient descent, and information propagation within infinite-wide NNs. Motivated by empirical analyses showing the potential of replacing Gaussian distributions with Stable distributions for the NN's weights, in this paper we investigate large-width asymptotics for (fully connected) feed-forward deep Stable NNs, i.e. deep NNs with Stable-distributed weights. First, we show that as the width goes to infinity jointly over the NN's layers, a suitably rescaled deep Stable NN converges weakly to a Stable SP whose distribution is characterized recursively through the NN's layers. Because of the NN's non-triangular structure, this is a non-standard asymptotic problem, to which we propose a novel and self-contained inductive approach, which may be of independent interest. Then, we establish sup-norm convergence rates of a deep Stable NN to a Stable SP, quantifying the critical difference between the settings of “joint growth” and “sequential growth” of the width over the NN's layers. Our work extends recent results on infinite-wide limits for deep Gaussian NNs to the more general deep Stable NNs, providing the first result on convergence rates for infinite-wide deep NNs.
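The width-rescaling at the heart of this result can be illustrated numerically: with Stable weights of index alpha, hidden-layer sums are rescaled by n^(-1/alpha) rather than the Gaussian n^(-1/2). Below is a small numpy sketch, purely illustrative (the tanh activation and single hidden layer are assumptions, not the paper's construction); symmetric alpha-stable draws use the Chambers-Mallows-Stuck method.

```python
import numpy as np

def sample_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def stable_layer_output(x, width, alpha, rng):
    """One hidden layer with Stable weights, rescaled by width ** (-1/alpha)."""
    w1 = sample_stable(alpha, (width, x.shape[0]), rng)  # input-to-hidden weights
    w2 = sample_stable(alpha, (1, width), rng)           # hidden-to-output weights
    hidden = np.tanh(w1 @ x)
    return (w2 @ hidden) * width ** (-1.0 / alpha)       # n^(-1/alpha) rescaling

rng = np.random.default_rng(0)
out = stable_layer_output(np.array([1.0, -0.5, 0.3]), width=10_000, alpha=1.8, rng=rng)
print(out.shape)  # (1,)
```

For alpha = 2 the sampler reduces to (a scaled) Gaussian, recovering the familiar n^(-1/2) regime.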

  • Research Article
  • Cited by 151
  • 10.1142/s0219530518500124
Deep distributed convolutional neural networks: Universality
  • Nov 1, 2018
  • Analysis and Applications
  • Ding-Xuan Zhou

Deep learning based on structured deep neural networks has provided powerful applications in various fields. The structures imposed on the deep neural networks are crucial, which makes deep learning essentially different from classical schemes based on fully connected neural networks. One of the commonly used deep neural network structures is generated by convolutions. The produced deep learning algorithms form the family of deep convolutional neural networks. Despite their power in some practical domains, little is known about the mathematical foundations of deep convolutional neural networks, such as universality of approximation. In this paper, we propose a family of new structured deep neural networks: deep distributed convolutional neural networks. We show that these deep neural networks have the same order of computational complexity as deep convolutional neural networks, and we prove their universality of approximation. Some ideas of our analysis come from ridge approximation, wavelets, and learning theory.

  • Research Article
  • Cited by 14
  • 10.3389/frai.2020.00049
An Interactive Visualization for Feature Localization in Deep Neural Networks
  • Jul 23, 2020
  • Frontiers in Artificial Intelligence
  • Martin Zurowietz + 1 more

Deep artificial neural networks have become the go-to method for many machine learning tasks. In the field of computer vision, deep convolutional neural networks achieve state-of-the-art performance for tasks such as classification, object detection, or instance segmentation. As deep neural networks become more and more complex, their inner workings become more and more opaque, rendering them a “black box” whose decision making process is no longer comprehensible. In recent years, various methods have been presented that attempt to peek inside the black box and to visualize the inner workings of deep neural networks, with a focus on deep convolutional neural networks for computer vision. These methods can serve as a toolbox to facilitate the design and inspection of neural networks for computer vision and the interpretation of the decision making process of the network. Here, we present the new tool Interactive Feature Localization in Deep neural networks (IFeaLiD) which provides a novel visualization approach to convolutional neural network layers. The tool interprets neural network layers as multivariate feature maps and visualizes the similarity between the feature vectors of individual pixels of an input image in a heat map display. The similarity display can reveal how the input image is perceived by different layers of the network and how the perception of one particular image region compares to the perception of the remaining image. IFeaLiD runs interactively in a web browser and can process even high resolution feature maps in real time by using GPU acceleration with WebGL 2. We present examples from four computer vision datasets with feature maps from different layers of a pre-trained ResNet101. IFeaLiD is open source and available online at https://ifealid.cebitec.uni-bielefeld.de.
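The core visualization the abstract describes — the similarity between one pixel's feature vector and all others in a layer's feature map — reduces to a cosine-similarity computation. A minimal numpy sketch (the shapes and random feature map are illustrative; IFeaLiD itself renders this in WebGL):

```python
import numpy as np

def similarity_heatmap(feature_map, query_yx):
    """Cosine similarity between one pixel's feature vector and all others.

    feature_map: (H, W, C) array -- one convolutional layer's output
    query_yx:    (row, col) of the pixel to compare against
    Returns an (H, W) heat map with values in [-1, 1].
    """
    h, w, c = feature_map.shape
    flat = feature_map.reshape(-1, c)
    q = feature_map[query_yx]                       # query feature vector
    norms = np.linalg.norm(flat, axis=1) * np.linalg.norm(q)
    sims = flat @ q / np.maximum(norms, 1e-12)      # avoid divide-by-zero
    return sims.reshape(h, w)

fmap = np.random.default_rng(0).normal(size=(8, 8, 16))
heat = similarity_heatmap(fmap, (3, 4))
print(heat.shape, round(float(heat[3, 4]), 3))  # the query pixel's self-similarity is 1.0
```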

  • Research Article
  • Cited by 117
  • 10.1186/s12911-020-01299-4
Stress detection using deep neural networks
  • Dec 1, 2020
  • BMC Medical Informatics and Decision Making
  • Russell Li + 1 more

Background: Over 70% of Americans regularly experience stress. Chronic stress results in cancer, cardiovascular disease, depression, and diabetes, and thus is deeply detrimental to physiological health and psychological wellbeing. Developing robust methods for the rapid and accurate detection of human stress is of paramount importance.

Methods: Prior research has shown that analyzing physiological signals is a reliable predictor of stress. Such signals are collected from sensors that are attached to the human body. Researchers have attempted to detect stress by using traditional machine learning methods to analyze physiological signals. Results, ranging between 50 and 90% accuracy, have been mixed. A limitation of traditional machine learning algorithms is the requirement for hand-crafted features. Accuracy decreases if features are misidentified. To address this deficiency, we developed two deep neural networks: a 1-dimensional (1D) convolutional neural network and a multilayer perceptron neural network. Deep neural networks do not require hand-crafted features but instead extract features from raw data through the layers of the neural networks. The deep neural networks analyzed physiological data collected from chest-worn and wrist-worn sensors to perform two tasks. We tailored each neural network to analyze data from either the chest-worn (1D convolutional neural network) or wrist-worn (multilayer perceptron neural network) sensors. The first task was binary classification for stress detection, in which the networks differentiated between stressed and non-stressed states. The second task was 3-class classification for emotion classification, in which the networks differentiated between baseline, stressed, and amused states. The networks were trained and tested on publicly available data collected in previous studies.

Results: The deep convolutional neural network achieved 99.80% and 99.55% accuracy rates for binary and 3-class classification, respectively. The deep multilayer perceptron neural network achieved 99.65% and 98.38% accuracy rates for binary and 3-class classification, respectively. The networks’ performance exhibited a significant improvement over past methods that analyzed physiological signals for both binary stress detection and 3-class emotion classification.

Conclusions: We demonstrated the potential of deep neural networks for developing robust, continuous, and noninvasive methods for stress detection and emotion classification, with the end goal of improving the quality of life.
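The central claim — that a 1D convolution extracts features directly from a raw sensor signal, with no hand-crafted features — can be sketched in a few lines of numpy. The filter count, filter length, and stride below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conv1d(signal, kernels, stride=1):
    """Valid 1D convolution: raw signal -> feature maps, no hand-crafted features.

    signal:  (T,) raw sensor samples
    kernels: (K, L) array of K learned filters of length L
    Returns (K, (T - L) // stride + 1) ReLU-activated feature maps.
    """
    k, l = kernels.shape
    steps = (signal.shape[0] - l) // stride + 1
    windows = np.stack([signal[i * stride: i * stride + l] for i in range(steps)])
    return np.maximum(kernels @ windows.T, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
ecg_like = rng.normal(size=700)          # stand-in for a raw chest-sensor signal
feats = conv1d(ecg_like, rng.normal(size=(8, 16)), stride=4)
print(feats.shape)  # (8, 172)
```

In a trained network the kernels are learned by backpropagation; stacking several such layers yields the hierarchical features the abstract refers to.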

  • Conference Article
  • Cited by 4
  • 10.1109/iciibms.2015.7439548
The 3-dimensional medical image recognition of right and left kidneys by deep GMDH-type neural network
  • Nov 1, 2015
  • Tadashi Kondo + 2 more

In this study, the deep multi-layered Group Method of Data Handling (GMDH)-type neural network algorithm using principal component-regression analysis is applied to recognition problems of the right and left kidney regions. The deep multi-layered GMDH-type neural network algorithm can automatically organize deep neural network architectures with many hidden layers, and these deep neural networks can identify the characteristics of very complex nonlinear systems. The architecture of the deep neural network with many hidden layers is automatically organized using the heuristic self-organization method, so as to minimize the prediction error criterion defined as Akaike's information criterion (AIC) or the Prediction Sum of Squares (PSS). The heuristic self-organization method is a type of evolutionary computation. In this deep GMDH-type neural network, principal component-regression analysis is used as the learning algorithm for the weights, so multicollinearity does not occur and stable, accurate prediction values are obtained. This new algorithm is applied to the medical image recognition of the right and left kidney regions. The optimum neural network architectures, which fit the complexity of the right and left kidney regions, are automatically organized, and the right and left kidney regions are automatically recognized and extracted by the organized deep GMDH-type neural networks. The recognition results are compared with those of a conventional sigmoid-function neural network trained using the backpropagation method, and it is shown that these deep GMDH-type neural networks are useful for the medical image recognition problems of the right and left kidney regions.

  • Conference Article
  • Cited by 1
  • 10.1117/12.2305226
Understanding adversarial attack and defense towards deep compressed neural networks
  • May 3, 2018
  • Qi Liu + 2 more

Modern deep neural networks (DNNs) have been demonstrating phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovation and computing hardware advancement. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are even imperceptible to human eyes, namely “adversarial examples”, raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory resource requirements imposed by the fast-growing DNN model size, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g. hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. Then we survey existing mitigation approaches such as gradient distillation, which was originally tailored to software-based DNN systems. Inspired by gradient distillation and weight reshaping, we further develop a near zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method achieves better resilience to adversarial attacks without additional training, while still maintaining very high accuracies across small and large DNN models for various image classification benchmarks such as MNIST and CIFAR10.

  • Research Article
  • Cited by 42
  • 10.1016/j.neucom.2018.06.092
Adaptive deep dynamic programming for integrated frequency control of multi-area multi-microgrid systems
  • Feb 14, 2019
  • Neurocomputing
  • Linfei Yin + 3 more


  • Research Article
  • Cited by 1
  • 10.4314/jasem.v27i11.35
Application of Deep Neural Network-Artificial Neural Network Model for Prediction Of Dew Point Pressure in Gas Condensate Reservoirs from Field-X in the Niger Delta Region Nigeria
  • Nov 28, 2023
  • Journal of Applied Sciences and Environmental Management
  • P U Abeshi + 4 more

Reservoirs of natural gas and gas condensate have been proposed as a potential means of providing affordable and cleaner energy amid simultaneous global population growth and industrial expansion. This work evaluates reservoir simulation for production optimization using a Deep Neural Network - Artificial Neural Network (DNN-ANN) model to predict the dew point pressure in gas condensate reservoirs from Field-X in the Niger Delta Region of Nigeria. The dew-point pressure (DPP) of gas condensate reservoirs was estimated as a function of gas composition, reservoir temperature, and the molecular weight and specific gravity of the heptanes-plus fraction. Results obtained show that the mean relative error (MRE) and R-squared (R2) are 3.35% and 0.99965, respectively, indicating that the model is excellent at predicting DPP values. The DNN-ANN model is also evaluated in comparison to earlier models created by previous authors. It is recommended that the DNN-ANN model developed in this study could be applied to reservoir simulation and modeling, well performance analysis, reservoir engineering problems, and production optimization.
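The two reported quality metrics are standard and easy to compute. A short numpy sketch of MRE and R-squared; the DPP values below are made-up illustrations, not the paper's data:

```python
import numpy as np

def mre(y_true, y_pred):
    """Mean relative error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

dpp_true = np.array([4100.0, 4350.0, 4600.0, 4900.0])  # hypothetical DPP values (psia)
dpp_pred = np.array([4080.0, 4390.0, 4570.0, 4920.0])
print(round(mre(dpp_true, dpp_pred), 2), round(r_squared(dpp_true, dpp_pred), 4))  # 0.62 0.9906
```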

  • Book Chapter
  • Cited by 5
  • 10.1007/978-3-030-00015-8_23
Attack on Deep Steganalysis Neural Networks
  • Jan 1, 2018
  • Shiyu Li + 5 more

Deep neural networks (DNNs) have achieved state-of-the-art performance on image classification and pattern recognition in recent years, and have also shown their power in the steganalysis field. However, research has revealed that DNNs can be easily fooled by adversarial examples generated by adding perturbations to the input. Deep steganalysis neural networks face the same potential threat. In this paper we discuss and analyze two different attack methods and apply them to attack deep steganalysis neural networks. We define the model and propose concrete attack steps; the results show that the two methods achieve 96.02% and 90.25% success rates, respectively, on the target DNN. Thus, adversarial example attacks are valid against deep steganalysis neural networks.
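The abstract does not name its two attack methods, but the general recipe — perturb the input along the gradient of the detector's loss — can be sketched with the well-known fast gradient sign method on a toy logistic "detector". Everything here (the model, weights, and epsilon) is an illustrative assumption:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast-gradient-sign perturbation of input x against a logistic detector.

    For binary cross-entropy loss, the gradient w.r.t. x is (sigmoid(w.x + b) - y) * w.
    The perturbed input is clipped back to the valid pixel range [0, 1].
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.0
x = rng.uniform(size=64)          # stand-in for image pixels in [0, 1]
y = 1.0                           # true label, e.g. "stego"
x_adv = fgsm(x, y, w, b, eps=0.05)
print(bool(np.abs(x_adv - x).max() <= 0.05 + 1e-12))  # True: perturbation bounded by eps
```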

  • Research Article
  • Cited by 4
  • 10.1016/j.ins.2022.08.090
A convergence analysis of Nesterov’s accelerated gradient method in training deep linear neural networks
  • Sep 5, 2022
  • Information Sciences
  • Xin Liu + 2 more


  • Conference Article
  • Cited by 2
  • 10.1109/sibircon56155.2022.10016932
Localization of Ontology Concepts in Deep Convolutional Neural Networks
  • Nov 11, 2022
  • Anton Agafonov + 1 more

With the proliferation of deep artificial neural networks, techniques allowing end users to understand why a network came to a certain conclusion are becoming increasingly important. The lack of such understanding is becoming a limiting factor in applying deep neural networks to critical tasks, where the price of error is high. Recently it has been shown that internal representations built by a deep neural network can sometimes be aligned with concepts of a domain ontology related to the network's target. This opens up the opportunity to explain the results of a deep neural network in human terms (defined in the ontology). The paper presents the results of several experiments aimed at understanding which layers of a neural network are most promising for alignment with a given ontology concept (characterized by its relations with the network's target). The experiments were performed with several datasets (XTRAINS, SCDB) and several network architectures (including a custom convolutional neural network architecture, ResNet, and MobileNetV2). For these dataset-architecture pairs we built "concept localization maps" showing how informative the output of each layer is for predicting that a given sample corresponds to a certain concept. The results of the experiments show that concepts that are "closer" to the target concept (definition-wise) are typically better expressed (or localized) in the last layers. Moreover, concept expression across layers typically follows a roughly unimodal shape. We believe that these results can be used for building effective concept-extraction algorithms and for improving ontology-based explanation techniques for deep neural networks.
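The "concept localization map" idea — scoring how informative each layer's output is for a given concept — can be approximated by fitting a simple probe per layer and comparing scores. A hedged numpy sketch (the least-squares probe and the synthetic activations are assumptions; the paper's exact scoring may differ):

```python
import numpy as np

def probe_accuracy(feats, labels):
    """Fit a least-squares linear probe on one layer's (flattened) activations;
    training accuracy serves as a rough 'how informative is this layer' score."""
    X = np.hstack([feats, np.ones((len(feats), 1))])  # add a bias column
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return float(((X @ w > 0.5) == labels).mean())

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200).astype(float)
layer_a = rng.normal(size=(200, 32)) + labels[:, None] * 2.0  # concept-bearing layer
layer_b = rng.normal(size=(200, 32))                          # uninformative layer
print(probe_accuracy(layer_a, labels) > probe_accuracy(layer_b, labels))  # True
```

Running such a probe over every layer and plotting the scores yields a localization map like the ones described above.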

  • Research Article
  • Cited by 68
  • 10.1016/j.eswa.2016.10.038
Growing random forest on deep convolutional neural networks for scene categorization
  • Oct 17, 2016
  • Expert Systems with Applications
  • Shuang Bai


  • Conference Article
  • Cited by 9
  • 10.1109/smartworld-uic-atc-scalcom-iop-sci.2019.00060
Prediction of Road Traffic Flow Based on Deep Recurrent Neural Networks
  • Aug 1, 2019
  • Zoe Bartlett + 3 more

Traffic congestion is a major issue for developed countries; therefore, research into the prediction of road traffic flow is vital. Deep neural networks, such as deep recurrent neural networks, are now being explored for road traffic flow prediction. However, what deep architecture is the most appropriate remains unanswered. Previous research into deep recurrent neural networks fails to compare them to other deep models; instead, comparisons are made with simple shallow models. To compound this issue, standard performance metrics assess a model's success solely on its accuracy. No consideration is given to computational cost. Furthermore, optimisation of a neural network's architecture can be difficult. There is no standard or analytical method to determine their correct structure. This often leads to sub-optimal architectures being used. Therefore, deep neural networks should be assessed on how sensitive the model is to architectural changes. In this paper, we have examined three recurrent neural networks (a standard recurrent, a long short-term memory, and a gated recurrent unit) to determine how they perform on time-series data from a real dataset. We compared their accuracy, training time, and sensitivity to architectural change. Additionally, we developed a new performance metric, the Standardised Accuracy and Time Score (STATS), which standardises accuracy and training time into a comparable score, allowing an overall score to be awarded. The experimental results show that, based on STATS, the gated recurrent unit produced the highest overall performance and accuracy score. Furthermore, its prediction was most stable against architectural changes. Conversely, the long short-term memory was the least stable model.
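The abstract does not give the STATS formula, but its idea — putting accuracy and training time on a common scale and combining them — can be sketched as follows. The z-score standardisation and the example numbers are assumptions, not the paper's definition:

```python
import numpy as np

def stats_score(accuracies, train_times):
    """Hypothetical sketch of a Standardised Accuracy and Time Score:
    z-score each metric across the candidate models, then reward accuracy
    and penalise training time. The published STATS formula may differ."""
    acc = np.asarray(accuracies, float)
    t = np.asarray(train_times, float)
    z = lambda v: (v - v.mean()) / v.std()
    return z(acc) - z(t)

models = ["RNN", "LSTM", "GRU"]
scores = stats_score([0.88, 0.91, 0.92], [120.0, 410.0, 260.0])
best = models[int(np.argmax(scores))]
print(best)  # GRU
```

With these (invented) numbers the GRU wins: it has the highest accuracy and only a moderate training time, echoing the trade-off the metric is meant to capture.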

  • Conference Article
  • Cited by 2
  • 10.1109/apsipaasc47483.2019.9023251
Speech Recognition Based on Deep Tensor Neural Network and Multifactor Feature
  • Nov 1, 2019
  • Yahui Shan + 5 more

This paper presents a speech recognition system based on a deep tensor neural network that uses a multifactor feature as the input feature of the acoustic model. First, a deep neural network is trained to estimate articulatory features from input speech, using the MOCHA database [1] as training data. Mel-frequency cepstral coefficients in conjunction with the articulatory features are used as the multifactor feature. A deep tensor neural network, which involves tensor interactions among neurons, is used as the acoustic model in this system. Speech recognition results indicate that the multifactor feature helps improve speech recognition performance not only under clean conditions but also under noisy background conditions, and that the deep tensor neural network is more capable of modeling multifactor features than a deep neural network because of its tensor interactions.

  • Conference Article
  • Cited by 33
  • 10.1145/2983563.2983567
Multimodal and Crossmodal Representation Learning from Textual and Visual Features with Bidirectional Deep Neural Networks for Video Hyperlinking
  • Oct 16, 2016
  • Vedran Vukotić + 2 more

Video hyperlinking represents a classical example of multimodal problems. Common approaches to such problems are early fusion of the initial modalities and crossmodal translation from one modality to the other. Recently, deep neural networks, especially deep autoencoders, have proven promising both for crossmodal translation and for early fusion via multimodal embedding. A particular architecture, bidirectional symmetrical deep neural networks, has been shown to yield improved multimodal embeddings over classical autoencoders, while also being able to perform crossmodal translation. In this work, we first focus on evaluating good single-modal continuous representations both for textual and for visual information. Word2Vec and paragraph vectors are evaluated for representing collections of words, such as parts of automatic transcripts and multiple visual concepts, while different deep convolutional neural networks are evaluated for directly embedding visual information, avoiding the creation of visual concepts. Secondly, we evaluate methods for multimodal fusion and crossmodal translation, with different single-modal pairs, in the task of video hyperlinking. Bidirectional (symmetrical) deep neural networks were shown to successfully tackle the downsides of multimodal autoencoders and yield a superior multimodal representation. In this work, we extensively test them in different settings, with different single-modal representations, within the context of video hyperlinking. Our novel bidirectional symmetrical deep neural networks are compared to classical autoencoders and are shown to yield multimodal embeddings that significantly (alpha = 0.0001) outperform multimodal embeddings obtained by deep autoencoders, with an absolute improvement in precision at 10 of 14.1% when embedding visual concepts and automatic transcripts, and an absolute improvement of 4.3% when embedding automatic transcripts with features obtained with very deep convolutional neural networks, yielding 80% precision at 10.

More from: International Journal of Innovative Computing and Applications
  • Research Article
  • 10.1504/ijica.2025.145037
Fuzzy goal programming in the case of exponential membership functions with quasiconcave piecewise linear exponents
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Maged George Iskander

  • Research Article
  • 10.1504/ijica.2025.145025
ESIDLPD: design of an efficient exudate statistics-based incremental deep learning model to detect progression of diabetic retinopathy
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Laxmikant S Kalkonde + 3 more

  • Research Article
  • 10.1504/ijica.2025.10072008
Enhancing Decision Making with Soft Set Theory: a Novel Approach to Object Recognition from Imprecise Data
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Anirban Mitra + 2 more

  • Research Article
  • 10.1504/ijica.2025.145036
A semantic segmentation framework for liver and liver tumour segmentation
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Toureche Amina + 1 more

  • Research Article
  • 10.1504/ijica.2025.10069524
AI-Enhanced ECG Monitoring for Arrhythmia Detection Using Semantic LinkNet Deep Neural Network
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Satish Chander + 1 more

  • Research Article
  • 10.1504/ijica.2025.10071826
Performance Enhancement of DC Motor Drive for Electric Vehicle Application by Using Deep Neural Network
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Sandesh Patel + 3 more

  • Research Article
  • 10.1504/ijica.2025.10069249
Advancing Human Action Recognition: Wavelet-DTW Enhanced Deep Learning with Multi-Head Attention
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Soufiana Mekouar + 3 more

  • Research Article
  • 10.1504/ijica.2025.10069858
Fuzzy Goal Programming in the Case of Exponential Membership Functions with Quasiconcave Piecewise Linear Exponents
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Maged Iskander

  • Research Article
  • 10.1504/ijica.2025.10071162
Acoustic Analysis of Chronic Obstructive Pulmonary Disorder using Transfer Learning - a Three-Class Problem
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Thomas George S + 3 more

  • Research Article
  • 10.1504/ijica.2025.10069730
A Semantic Segmentation Framework for Liver and Liver Tumour Segmentation
  • Jan 1, 2025
  • International Journal of Innovative Computing and Applications
  • Hakim Bendjenna + 1 more
