A Shared Two-way Cybersecurity Model for Enhancing Cloud Service Sharing for Distributed User Applications

Abstract

Cloud services provide decentralized and pervasive access to resources, reducing the complex infrastructure requirements placed on the user. In decentralized service access, enforcing security that matches user requirements is tedious. Therefore, cloud services incorporate cybersecurity measures to administer standard resource access for users. In this paper, a shared two-way security model (STSM) is proposed to provide adaptable service security for end-users. In this security model, a cooperative closed-access session for information sharing between the cloud and the end-user is designed with the help of cybersecurity features. This closed access provides less complex authentication for users and data, capable of matching the verifications performed by the cloud services. A deep belief learning algorithm is used to differentiate cooperative and non-cooperative secure sessions between the users and the cloud, ensuring closed access throughout the data-sharing period. The output of the belief network determines the actual session time between the user and the cloud, extending the span of the sharing session. In addition, the proposed model reduces false alarms and communication failures while keeping complexity under control.
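To make the session-control idea concrete, here is a minimal, hypothetical sketch: a tiny feedforward classifier stands in for the paper's deep belief network, labels a session as cooperative or non-cooperative from a few assumed features (authentication delay, failed verifications, loss rate), and scales the allowed sharing span by the cooperative probability. All feature names, data, and thresholds are illustrative assumptions, not the STSM implementation.

```python
# Hypothetical sketch: label a sharing session as cooperative or
# non-cooperative from a few session features, then scale the allowed
# sharing span by the cooperative probability. The feature names and the
# tiny two-layer network are illustrative stand-ins for the paper's
# deep belief network, not its actual implementation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic session features: [auth_delay_ms, failed_verifications, loss_rate]
coop = rng.normal([40, 0.2, 0.01], [10, 0.2, 0.005], size=(200, 3))
noncoop = rng.normal([120, 2.0, 0.08], [30, 0.8, 0.02], size=(200, 3))
X = np.vstack([coop, noncoop])
y = np.r_[np.ones(200), np.zeros(200)]              # 1 = cooperative
mu, sd = X.mean(0), X.std(0)
X = (X - mu) / sd                                   # standardize features

W1 = rng.normal(0, 0.1, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                               # plain gradient descent
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    g_out = (p - y)[:, None] / len(y)               # dLoss/dlogit (cross-entropy)
    g_h = (g_out @ W2.T) * (1 - h ** 2)             # backprop through tanh
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(0)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(0)

def allowed_session_seconds(raw_features, base=300, floor=30):
    """Scale the base sharing span by the predicted cooperative probability."""
    z = (raw_features - mu) / sd
    p_coop = float(sigmoid(np.tanh(z @ W1 + b1) @ W2 + b2)[0])
    return max(floor, base * p_coop), p_coop

span, p = allowed_session_seconds(np.array([45.0, 0.0, 0.01]))
print(f"cooperative prob = {p:.2f}, allowed span = {span:.0f}s")
```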

Similar Papers
  • Research Article
  • Cited by 3
  • 10.1155/2022/5780549
New Media Marketing Strategy Optimization in the Catering Industry Based on Deep Machine Learning Algorithms
  • Jan 1, 2022
  • Journal of Mathematics
  • Zikang Peng

With the in-depth development of new-generation network technologies such as the Internet, big data, and cloud intelligence, people can obtain massive amounts of information on mobile phones and mobile platforms. The era of big data has arrived, which raises questions for the development of corporate marketing. With the development of Internet technology, people use mobile terminals for longer and longer periods of time. New media has gradually become the mainstream of the media arena. It has distinctive features such as freedom to reach audiences, diverse content forms, and timeliness of information release, which have changed the traditional marketing model and have a profound impact on the development of the market. This article draws on relevant theories of new media, marketing, and catering industry marketing strategy, studies the concepts and characteristics of new media, clarifies the impact of the development of new media on the catering industry and its audience groups, and examines that impact from multiple dimensions. Based on the development factors in the new media environment, combined with marketing theory, it puts forward suggestions for catering companies to use new media for marketing planning in product innovation, improving information channels, creating network events and topics, and promoting innovation and health in the catering industry. A marketing strategy based on deep machine learning algorithms is also proposed. It includes a cloud server that communicates with the e-commerce software platform and records physical sales input. The cloud server is connected to data collection, data processing, and communication modules. The communication module is connected to a deep machine learning algorithm system, which in turn communicates with a sales platform. The sales platform is connected to advertising settings and advertising delivery, and the delivery component is driven by an advertisement delivery method algorithm that communicates with the cloud server. The article uses deep machine learning algorithms to process the data so that it is easy to view and interpret, and the advertisement delivery method algorithm computes the best way of advertising and which advertisements to deliver.

  • Research Article
  • 10.13052/jwe1540-9589.2453
Robust Cloud Service Ranking with Deep Learning and Multi-criteria Analysis
  • Aug 26, 2025
  • Journal of Web Engineering
  • Pooja Goyal + 1 more

With the rapid growth of cloud services, it is crucial to have strong assessment methods in place to rate these services according to their performance, dependability, and security. This study introduces a holistic methodology that utilizes advanced deep learning (DL) algorithms to prioritize and evaluate cloud services. Our model incorporates many assessment criteria, including latency, throughput, availability, and security measures. These criteria are trained using a varied collection of performance measurements from cloud services. We validate the effectiveness of our methodology by comprehensive experiments, attaining greater precision and significance in ranking compared to conventional approaches. The DL model underwent evaluation using a testing set, resulting in a mean absolute error (MAE) of 0.15 in ranking scores. The algorithm regularly achieved superior results compared to conventional ranking approaches, particularly in situations where performance measures varied. Through the incorporation of security metrics, the model successfully assessed and ranked cloud service providers (CSPs) based not only on their performance, but also on their ability to withstand security threats. The DL technique exhibited more flexibility and contextual awareness in its rankings, hence showcasing its superiority in adjusting to real-time data. The research conducted a comparison between DL-based rankings and conventional methodologies and industry standards, demonstrating its superiority in effectively adjusting to real-time data. The study technique entails gathering data from many CSPs to construct a resilient framework for evaluating cloud services using DL models. The data is obtained from publicly available performance statistics, cloud monitoring tools, user evaluations, and problem reports. The collection comprises both structured and unstructured data, including essential performance and accuracy indicators.
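As a rough illustration of multi-criteria ranking with a neural regressor (not the authors' model), the following sketch scores synthetic cloud services from assumed latency, throughput, availability, and security features, reports MAE on held-out services, and ranks a few hypothetical providers.

```python
# Illustrative only (not the paper's model): score cloud services from
# multi-criteria measurements with a small neural regressor and rank
# providers by the predicted score. The feature set, synthetic scoring
# rule, and provider names are assumptions made for this example.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 500
# Columns: latency_ms, throughput_mbps, availability, security_score
X = np.column_stack([
    rng.uniform(10, 300, n),
    rng.uniform(50, 1000, n),
    rng.uniform(0.95, 1.0, n),
    rng.uniform(0.0, 1.0, n),
])
# Hypothetical "true" score favouring low latency, high throughput,
# availability and security, plus noise.
y = (-0.002 * X[:, 0] + 0.0005 * X[:, 1] + 2.0 * X[:, 2] + X[:, 3]
     + rng.normal(0, 0.05, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=3000, random_state=0))
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("MAE on held-out services:", round(mean_absolute_error(y[400:], pred), 3))

# Rank three hypothetical providers by predicted score (higher is better)
providers = {"csp_a": X[400], "csp_b": X[401], "csp_c": X[402]}
ranked = sorted(providers,
                key=lambda k: model.predict(providers[k][None, :])[0],
                reverse=True)
print("ranking:", ranked)
```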

  • Research Article
  • Cited by 72
  • 10.1007/s13202-021-01087-4
Prediction performance advantages of deep machine learning algorithms for two-phase flow rates through wellhead chokes
  • Feb 23, 2021
  • Journal of Petroleum Exploration and Production
  • Hossein Shojaei Barjouei + 6 more

Two-phase flow rate estimation of liquid and gas flow through wellhead chokes is essential for determining and monitoring production performance from oil and gas reservoirs at specific well locations. Liquid flow rate (QL) tends to be nonlinearly related to its influencing variables, making empirical correlations unreliable for predictions applied to different reservoir conditions and favoring machine learning (ML) algorithms for that purpose. Recent advances in deep learning (DL) algorithms make them useful for predicting wellhead choke flow rates for large field datasets and suitable for wider application once trained. DL has not previously been applied to predict QL from a large oil field. In this study, 7245 multi-well data records from the Sorush oil field are used to compare the QL prediction performance of traditional empirical, ML and DL algorithms based on four influencing variables: choke size (D64), wellhead pressure (Pwh), oil specific gravity (γo) and gas–liquid ratio (GLR). The prevailing flow regime for the wells evaluated is critical flow. The DL algorithm substantially outperforms the other algorithms considered in terms of QL prediction accuracy. The DL algorithm predicts QL for the testing subset with a root-mean-squared error (RMSE) of 196 STB/day and coefficient of determination (R2) of 0.9969 for the Sorush dataset. The QL prediction accuracy of the models evaluated for this dataset can be arranged in the descending order: DL > DT > RF > ANN > SVR > Pilehvari > Baxendell > Ros > Gilbert > Achong. Analysis reveals that input variable GLR has the greatest, whereas input variable D64 has the least, relative influence on the dependent variable QL.
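A minimal sketch of this kind of regression set-up follows, using synthetic data generated from an assumed Gilbert-like choke relation rather than the Sorush field records; it only illustrates how RMSE and R2 would be computed for a QL predictor built on the four named inputs.

```python
# Sketch of the regression set-up only: predict liquid rate QL from the
# four influencing variables named above. The synthetic generator loosely
# mimics a Gilbert-type choke relation and is not the Sorush field data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(5)
n = 3000
d64 = rng.uniform(16, 64, n)            # choke size (1/64 inch)
pwh = rng.uniform(200, 3000, n)         # wellhead pressure (psi)
gamma_o = rng.uniform(0.80, 0.95, n)    # oil specific gravity
glr = rng.uniform(100, 2000, n)         # gas-liquid ratio (scf/STB)

# Hypothetical nonlinear relation plus noise (illustration only)
ql = pwh * d64 ** 1.89 / (10.0 * glr ** 0.546 * gamma_o) + rng.normal(0, 50, n)

X = np.column_stack([d64, pwh, gamma_o, glr])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32),
                                   max_iter=3000, random_state=0))
model.fit(X[:2400], ql[:2400])
pred = model.predict(X[2400:])
rmse = np.sqrt(mean_squared_error(ql[2400:], pred))
print(f"RMSE = {rmse:.0f} STB/day, R2 = {r2_score(ql[2400:], pred):.3f}")
```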

  • Research Article
  • Cited by 1
  • 10.17485/ijst/2016/v9i47/101372
A Robust and Secure Lightweight Authentication for the Limited Resource Wireless Ad-hoc Network
  • Jan 20, 2016
  • Indian Journal of Science and Technology
  • Preet Kamal Sharma

Objectives: The proposed security model is specifically designed to increase the level of security by implementing node integrity verification and a data encryption service over wireless ad-hoc networks. Methods/Statistical Analysis: The proposed security and authentication model is defined using a variable-key-length authentication scheme of 8 to 16 bytes, which is difficult to predict and resistant to cryptanalysis attacks, as there is no uniform mechanism for performing cryptanalysis on the authentication data during its exchange or propagation in wireless ad-hoc networks. Findings: In this paper, we propose public-cryptosystem-based encryption for a lightweight and secure authentication scheme, which uses RSA encryption as the public cryptosystem. The proposed security and authentication model has been analyzed in depth for protection against a variety of attacks using the primary parameters of authentication delay and data loss. The model has been analyzed against three primary resource-jamming attacks: Distributed Denial of Service (DDoS), selective jamming, and black hole attacks, covering both data dropping and jamming. The proposed authentication and security model for wireless ad-hoc routing has been evaluated on the basis of specific performance parameters. The robust and flexible performance of the proposed security and authentication model is evident from the results obtained across all paradigms of the simulation. Application/Improvements: The lightweight and robust authentication-based security mechanism has been proposed for higher-order suitability in serving the specific purpose of security against attacks over wireless networks.
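The paper's exact variable-key-length scheme is not reproduced here, but the following sketch shows the general shape of the RSA-based challenge/response authentication that such a public-cryptosystem scheme relies on (assumes the Python cryptography package; the protocol details are illustrative).

```python
# Minimal sketch of an RSA-based challenge/response authentication
# exchange, in the spirit of the public-cryptosystem scheme described
# above. This is not the paper's exact protocol; it only illustrates
# how a node can prove its identity by signing a fresh challenge.
# Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Node generates its key pair once; the public key is distributed
# to peers during network bootstrap.
node_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
node_public = node_private.public_key()

# Verifier sends a random challenge (nonce) to the node.
challenge = os.urandom(16)

# Node signs the challenge with its private key.
signature = node_private.sign(
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verifier checks the signature against the node's public key.
try:
    node_public.verify(
        signature,
        challenge,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("node authenticated")
except InvalidSignature:
    print("authentication failed")
```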

  • Research Article
  • Cited by 102
  • 10.1155/2022/9023719
Intrusion Detection System for Industrial Internet of Things Based on Deep Reinforcement Learning
  • Jan 1, 2022
  • Wireless Communications and Mobile Computing
  • Sumegh Tharewal + 5 more

The Industrial Internet of Things has grown significantly in recent years. The implementation of industrial digitalization, automation, and intelligence introduced a slew of cyber risks, and the complex and varied Industrial Internet of Things environment provides a new attack surface for network attackers. As a result, conventional intrusion detection technology cannot satisfy the network threat discovery requirements of today's Industrial Internet of Things environment. In this research, the authors use reinforcement learning rather than supervised or unsupervised learning, because it can improve the decision-making ability of the learning process: deep networks perform nonlinear transformations of large-scale raw input data into higher-level abstract representations, while the agent learns from feedback signals in the absence of guiding knowledge, following a trial-and-error model of interaction with the environment to find a good solution. In this respect, this article presents a proximal policy optimization method for an Industrial Internet of Things intrusion detection system based on a deep reinforcement learning algorithm. This method combines deep learning's observation capability with reinforcement learning's decision-making capability to enable efficient detection of different kinds of cyberattacks on the Industrial Internet of Things. In this manuscript, the DRL-IDS intrusion detection system is built on a feature selection method based on LightGBM, which efficiently selects the most informative feature set from Industrial Internet of Things data; when paired with deep learning algorithms, it effectively detects intrusions. First, the application uses LightGBM's feature selection algorithm to extract the most informative feature set from Industrial Internet of Things data; then, in conjunction with the deep learning algorithm, the hidden layer of a multilayer perceptron network is used as the shared network structure for the value network and policy network in the PPO2 algorithm; finally, the intrusion detection model is constructed using the PPO2 algorithm and ReLU activation. Numerous tests conducted on a publicly available Industrial Internet of Things data set demonstrate that the suggested intrusion detection system detects 99 percent of different kinds of network attacks on the Industrial Internet of Things, with an accuracy improvement of 0.9 percent. The accuracy, precision, recall rate, F1 score, and other performance indicators are superior to those of existing intrusion detection systems based on deep learning models such as LSTM, CNN, and RNN, as well as deep reinforcement learning models such as DDQN and DQN.
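A small sketch of the first stage only, under assumed synthetic data: LightGBM importance ranks candidate IIoT traffic features and the top-k are retained before being handed to the reinforcement-learning detector (the PPO2 stage is omitted). The value of k and the label rule are illustrative, not taken from the paper.

```python
# Sketch of the feature-selection stage only: rank IIoT traffic features
# with LightGBM importance and keep the top-k before handing them to the
# RL-based detector. Synthetic data and the value of k are assumptions.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(2)
n, d = 2000, 20
X = rng.normal(size=(n, d))
# Hypothetical labels: attacks correlate with a handful of features.
y = ((X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7] + rng.normal(0, 0.5, n)) > 0).astype(int)

clf = lgb.LGBMClassifier(n_estimators=200, verbose=-1)
clf.fit(X, y)

k = 8
top_idx = np.argsort(clf.feature_importances_)[::-1][:k]
print("selected feature indices:", sorted(top_idx.tolist()))

# X[:, top_idx] would then feed the PPO-based intrusion detection agent.
X_selected = X[:, top_idx]
print("reduced feature matrix shape:", X_selected.shape)
```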

  • Research Article
  • Cited by 8
  • 10.1155/2022/3452176
Deep Learning Algorithm-Based Ultrasound Image Information in Diagnosis and Treatment of Pernicious Placenta Previa.
  • Jun 6, 2022
  • Computational and Mathematical Methods in Medicine
  • Xiao Yang + 2 more

This study explored the value of the deep dictionary learning algorithm in constructing a B-ultrasound scoring system and its application in the clinical diagnosis and treatment of pernicious placenta previa (PPP). 60 patients with PPP were divided into a low-risk group (severe, implantable) and a high-risk group (adhesive, penetrating) according to their clinical characteristics, B-ultrasound imaging characteristics, and postpartum pathological examination results. Using the deep learning algorithm on PPP ultrasound image information, a B-ultrasound image diagnostic scoring system was established to predict the depth of the various types of placenta accreta. The results showed that the cut-off values of the severe, implantable, adhesive, and penetrating types were <2.3, 2.3-6.5, 6.5-9, and ≥9 points, respectively; there were significant differences in the termination of pregnancy and neonatal birth weight between the two groups (P < 0.05); the positive predictive value, negative predictive value, and false positive rate of ultrasound images based on the deep dictionary learning algorithm for PPP were 95.33%, 94.89%, and 3.56%, respectively. Thus, the ultrasound image diagnostic scoring system based on the deep learning algorithm has an important predictive role for PPP, which can provide a more targeted diagnosis and treatment plan for patients in clinical practice and improve the efficiency of prediction and treatment.
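The reported cut-off values translate directly into a simple scoring rule; the snippet below just encodes them as stated in the abstract.

```python
# Direct encoding of the score cut-offs reported above; the function maps
# a B-ultrasound score to the predicted placenta accreta type.
def classify_ppp(score: float) -> str:
    if score < 2.3:
        return "severe"
    elif score < 6.5:
        return "implantable"
    elif score < 9:
        return "adhesive"
    return "penetrating"

for s in (1.8, 4.0, 7.2, 10.5):
    print(s, "->", classify_ppp(s))
```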

  • Research Article
  • Cited by 1
  • 10.1093/eurheartj/ehab724.3069
ACS mortality prediction in Asian in-hospital patients with deep learning using machine learning feature selection
  • Oct 12, 2021
  • European Heart Journal
  • S Kasim + 4 more

Background: Thrombolysis in Myocardial Infarction (TIMI) is used for predicting the mortality rate of acute coronary syndrome (ACS) patients. TIMI was developed on a Western cohort with limited data on Asian cohorts, and there are separate TIMI scores for STEMI and NSTEMI. Deep learning (DL) and machine learning (ML) algorithms such as the support vector machine (SVM), applied to population-specific datasets, have yielded a higher area under the curve (AUC) than TIMI. A limitation of DL, compared to ML algorithms, is that the features selected by the algorithm are unknown. Purpose: To construct a single in-hospital mortality risk scoring system that combines SVM feature importance and a DL algorithm for Asian patients with ACS, applicable to both STEMI and NSTEMI patients; and to compare the performance of DL constructed using predictors selected from SVM feature extraction, DL using complete features, and the TIMI risk score for STEMI and NSTEMI patients. Methods: We constructed four algorithms: (i) DL and SVM algorithms with features selected from SVM variable importance, and (ii) DL and SVM algorithms without feature selection. SVM feature importance with backward elimination is used to select and rank important variables. We used registry data from the National Cardiovascular Disease Database covering 13,190 patients. Fifty-four parameters including demographics, cardiovascular risk, medications, and clinical variables were considered. AUC was used as the performance evaluation metric. All algorithms were validated on a validation dataset and compared to the conventional TIMI for STEMI and NSTEMI. Results: Validation results in Figure 1 are given by STEMI and NSTEMI patients. Both DL algorithms outperformed ML and the TIMI score on validation data. Similar performance is observed for the DL and SVM algorithms using all predictors (54 predictors) and the DL and SVM algorithms using selected predictors (14 predictors). Predictors selected by the SVM feature selection are: age, heart rate, Killip class, fasting blood glucose, ST-elevation, CABG, cardiac catheterization, angina episode, HDL-C, LDL-C, other lipid-lowering agents, statin, anti-arrhythmic agent, and oral hypoglycaemic agents. CABG and pharmacotherapy drugs as selected predictors improve mortality prediction compared to the TIMI score. With DL, 25.87% of STEMI patients and 19.71% of NSTEMI patients are estimated as high risk (risk probabilities of >50%). TIMI underestimated the mortality risk of high-risk patients (≥5 risk scores), at 13.08% of STEMI patients and 4.65% of NSTEMI patients (Figure 2). Conclusions: In the Asian multi-ethnic population, patients with ACS can be better classified using one single algorithm than with a conventional method like TIMI, which requires two different scores. Combining ML feature selection with DL allows the identification of distinct factors related to in-hospital mortality of ACS patients in a unique Asian population for better mortality prediction. Funding Acknowledgement: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Technology Development Fund. Figure 1: Performance results. Figure 2: Analysis on the validation set.
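A schematic sketch of the two-stage pipeline on synthetic data (not the ACS registry): SVM-based backward feature elimination retains 14 of 54 candidate predictors, mirroring the counts quoted above, and a small neural classifier trained on the retained predictors is evaluated by AUC. The data generator and network sizes are assumptions.

```python
# Sketch of the two-stage idea: rank predictors with a linear-SVM-based
# recursive (backward) feature elimination, then train a small neural
# network on the retained predictors and report AUC. Data is synthetic,
# not the ACS registry; 14 retained of 54 mirrors the counts quoted above.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n, d = 3000, 54                     # 54 candidate predictors, as in the study
X = rng.normal(size=(n, d))
# Hypothetical outcome driven by a subset of predictors
logit = X[:, 0] + 0.8 * X[:, 5] - 0.6 * X[:, 10] + 0.4 * X[:, 20]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

selector = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=14)
selector.fit(X[:2400], y[:2400])
X_sel = selector.transform(X)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                                  random_state=0))
clf.fit(X_sel[:2400], y[:2400])
auc = roc_auc_score(y[2400:], clf.predict_proba(X_sel[2400:])[:, 1])
print(f"validation AUC = {auc:.3f}")
```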

  • Research Article
  • Cited by 13
  • 10.1007/s12065-019-00284-9
Deep learning based dynamic task offloading in mobile cloudlet environments
  • Sep 13, 2019
  • Evolutionary Intelligence
  • D Shobha Rani + 1 more

The mobile computing world is migrating from 4G to 5G, and one of the major offerings of 5G is seamless computing power, which remains a major setback in the current scenario. The major difficulties that need to be addressed are computing capacity, quality of service, speed, power, and security. This research paper addresses the issue of task management in mobile systems, which is directly related to quality. The article proposes a deep learning-based algorithm that performs dynamic task offloading in the mobile cloudlet, since the cloudlet aids in reducing the delay that occurs in the WLAN. The delay in performing tasks is one of the major drawbacks of the cloudlet, as it is deprived of resources compared to the cloud server; consequently, the tasks to be performed are divided and assigned to mobile devices, different cloud servers, and the cloudlet itself. Therefore, to determine the combination of devices required to perform different tasks, deep learning algorithms are considered. The algorithm is responsible for identifying the subtasks and deciding whether each subtask should be computed on the device, the cloudlet, or the cloud server. The proposed algorithm is named Deep Learning based Dynamic Task Offloading in Mobile Cloudlet (DLDTO). The algorithm is implemented and compared with Cloudlet based Dynamic Task Offloading (CDTO). The overall analysis and comparison with the existing CDTO for job allocation show that the proposed DLDTO algorithm performs better in terms of energy consumption and completion time.
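The learned offloading policy itself is not reproduced here; the sketch below substitutes a simple weighted time/energy cost model to show the shape of the per-subtask decision among device, cloudlet, and cloud. All rates and weights are invented for illustration.

```python
# Simplified stand-in for the learned offloading decision: estimate
# completion time and energy for running a subtask on the device, the
# cloudlet, or the cloud, and pick the lowest weighted combination.
# All rates and weights are illustrative assumptions, not DLDTO's model.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    cpu_cycles_per_sec: float   # processing speed
    uplink_mbps: float          # 0 means local execution (no transfer)
    energy_per_mb: float        # transmit energy cost (J/MB)
    energy_per_gcycle: float    # local compute energy cost (J/Gcycle)

TARGETS = [
    Target("device",   2e9,   0, 0.0, 1.2),
    Target("cloudlet", 8e9,  80, 2.0, 0.0),
    Target("cloud",   32e9,  20, 2.0, 0.0),
]

def offload(cycles: float, data_mb: float, w_time=0.7, w_energy=0.3) -> str:
    """Pick the execution target with the lowest weighted time/energy cost."""
    best, best_cost = None, float("inf")
    for t in TARGETS:
        tx_time = 0.0 if t.uplink_mbps == 0 else data_mb * 8 / t.uplink_mbps
        exec_time = cycles / t.cpu_cycles_per_sec
        energy = data_mb * t.energy_per_mb + (cycles / 1e9) * t.energy_per_gcycle
        cost = w_time * (tx_time + exec_time) + w_energy * energy
        if cost < best_cost:
            best, best_cost = t.name, cost
    return best

print(offload(cycles=5e8, data_mb=0.5))   # light subtask
print(offload(cycles=4e10, data_mb=2.0))  # compute-heavy subtask
```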

  • Research Article
  • Cited by 2
  • 10.11591/ijres.v14.i1.pp291-300
Development of internet of vehicles and recurrent neural network enabled intelligent transportation system for smart cities
  • Mar 1, 2025
  • International Journal of Reconfigurable and Embedded Systems (IJRES)
  • Jyoti Surve + 7 more

The number of deaths has increased as a direct result of the increased frequency of traffic accidents, congestion, and other risk factors. Developing countries have prioritised the development of intelligent transport systems in order to reduce pollution, traffic congestion, and wasted time. This article describes an intelligent transport system that leverages the internet of vehicles (IoV) and deep learning to forecast traffic congestion. Data is acquired from a car's global positioning system (GPS), road and vehicle sensors, traffic cameras, and measurements of traffic speed, density, and flow. All acquired data is stored in one location on a cloud server, which also stores historical traffic, road, and vehicle data. Features are optimised using particle swarm optimisation. The optimised dataset is used to train and test recurrent neural networks (RNNs), support vector machines (SVMs), and multilayer perceptrons (MLPs). The deep learning algorithm can predict traffic congestion and recommend to drivers how fast to travel and which route to take. The experimental work employs the Performance Measurement System (PeMS) traffic dataset. The RNN achieved an accuracy of 95.1%.
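A toy version of the recurrent prediction stage follows, with synthetic traffic sequences in place of PeMS data and the particle swarm feature step omitted; it only shows how a small RNN classifier over (speed, density, flow) windows would be wired up (assumes TensorFlow/Keras).

```python
# Sketch only: a tiny recurrent classifier over short traffic sequences
# (speed, density, flow) predicting congestion. The PSO feature step and
# real PeMS data are omitted; shapes, labels and sizes are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
n, steps, feats = 1000, 12, 3            # 12 time steps x (speed, density, flow)
X = rng.normal(size=(n, steps, feats)).astype("float32")
# Hypothetical rule: lower mean speed than density -> congested
y = ((X[:, :, 0].mean(1) - X[:, :, 1].mean(1)) < 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(steps, feats)),
    tf.keras.layers.SimpleRNN(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X[:800], y[:800], epochs=5, verbose=0)
loss, acc = model.evaluate(X[800:], y[800:], verbose=0)
print(f"held-out accuracy: {acc:.2f}")
```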

  • Research Article
  • Cited by 1
  • 10.54097/70266446
Synergies and Challenges in the Integration of Cloud Computing and Deep Learning: Current Status, Interconnectedness, and Future Directions
  • May 28, 2024
  • Highlights in Science, Engineering and Technology
  • Zihang Yang

This article reviews the status and recent developments in the integration of cloud computing and deep learning, as well as the interrelationship between these two technologies. The paper explores the intersection of cloud computing and deep learning in addressing cybersecurity challenges. Amidst the rapid expansion of the worldwide public cloud services market, the vulnerability to cyber-attacks and breaches in data management is on the rise. Different intrusion detection systems use different deep learning techniques to improve the effectiveness of intrusion detection in cloud computing environments. Additionally, the use of encryption technology and the corresponding deep learning retrieval technology further improves the security of cloud data. Moreover, the paper deeply studies how the scheduling mechanism of deep reinforcement learning can optimize the performance of cloud services by efficiently allocating resources and solving the problem of slow cloud service speed. It also derives the optimal energy strategy through deep neural networks to address the energy consumption challenges in cloud computing data centers. This article also reviews the five emerging architectures of cloud computing and explores the role of deep learning within these frameworks. Finally, it analyzes some of the challenges facing the future of cloud computing and deep learning, including the security and confidentiality of cloud computing, as well as low latency and high throughput optimization in the field of deep learning. In summary, this article provides insight into current trends, challenges, and future prospects for the evolving integration between cloud computing and deep learning.

  • Research Article
  • 10.48175/ijarsct-14385
Green Computing with Deep Learning for Data Centers
  • Dec 31, 2023
  • International Journal of Advanced Research in Science, Communication and Technology
  • Doni Kavya

Due to the development of cloud services, large volumes of data are transferred between users and cloud servers. This transmission of data consumes huge amounts of energy, which occurs during the operation of network infrastructure, the conversion of electrical to optical signals for long-distance transport, and signal amplification. Green computing is the use of computing devices in an environmentally friendly way, i.e., using electrical energy efficiently; data centers require a significant amount of electricity to operate and cool their servers, leading to carbon emissions from the burning of fossil fuels. Green computing in cloud services is about optimizing energy consumption, and by incorporating deep learning algorithms we can enhance the energy efficiency of cloud infrastructure. These algorithms can analyze real-time data from sensors, optimize resource allocation, and dynamically adjust power usage. Through intelligent workload scheduling, server consolidation, and power management, deep learning enables the reduction of energy waste and carbon emissions. The integration of deep learning in cloud services not only improves energy efficiency but also enhances performance and cost-effectiveness. Here we use a deep learning model for workload prediction and resource provisioning. By analyzing historical workload patterns and user behavior, deep learning algorithms can predict future resource demands and allocate resources accordingly, leading to more efficient resource utilization and energy savings.
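As a concrete, hypothetical example of the workload-prediction idea, the sketch below forecasts next-step CPU demand from lag features of a synthetic diurnal trace and provisions capacity with a fixed headroom factor; the trace and the 20% headroom are assumptions.

```python
# Illustrative workload-prediction sketch: learn next-hour CPU demand from
# recent history (lag features) and provision capacity with a small
# headroom. The synthetic trace and headroom factor are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
hours = np.arange(24 * 60)                    # 60 days of hourly samples
# Synthetic diurnal CPU demand (cores) with noise
demand = 40 + 25 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

lags = 6                                      # predict demand[t] from last 6 hours
X = np.array([demand[i - lags:i] for i in range(lags, len(demand))])
y = demand[lags:]
split = int(0.8 * len(X))

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("mean abs error (cores):", round(np.abs(pred - y[split:]).mean(), 2))

headroom = 1.2                                # provision 20% above the forecast
provisioned = headroom * pred
print("example forecast / provisioned cores:",
      round(pred[0], 1), "/", round(provisioned[0], 1))
```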

  • Research Article
  • Cited by 1
  • 10.12182/20210660103
Application of Deep Learning Reconstruction Algorithm in Low-Dose Thin-Slice Liver CT of Healthy Volunteers
  • Sep 1, 2021
  • Sichuan da xue xue bao. Yi xue ban = Journal of Sichuan University. Medical science edition
  • Ling-Ming Zeng + 8 more

To explore the clinical feasibility of applying a deep learning (DL) reconstruction algorithm in low-dose thin-slice liver CT examination of healthy volunteers by comparing the DL-based reconstruction algorithm, the filtered back projection (FBP) reconstruction algorithm and the iterative reconstruction (IR) algorithm. A standard water phantom with a diameter of 180 mm was scanned, using a 160-slice multi-detector CT scanner from United Imaging Healthcare, to compare the noise power spectra of the DL, FBP and IR algorithms. 100 healthy volunteers were prospectively enrolled, with 50 assigned to the normal-dose group (ND) and 50 to the low-dose group (LD). The IR algorithm was used to reconstruct images in the ND group, while the DL, FBP and IR algorithms were used to reconstruct images in the LD group. One-way analysis of variance was used to compare the liver CT values, liver noise, liver signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and figure of merit (FOM) of the ND-IR, LD-FBP, LD-IR and LD-DL images. The Kruskal-Wallis test was used to analyse subjective scores of anatomical structures. The DL algorithm had the lowest average peak value of the noise power spectrum, and its shape was similar to that of the medium-level IR algorithm. Liver CT values of ND-IR, LD-FBP, LD-IR and LD-DL did not show statistically significant differences. The noise of LD-DL was lower than that of LD-FBP, LD-IR and ND-IR (P<0.05), and the SNR, CNR and FOM of LD-DL were higher than those of LD-FBP, LD-IR and ND-IR (P<0.05). The subjective scores of anatomical structures of LD-DL did not show significant differences compared with those of ND-IR (P>0.05), and were higher than those of LD-FBP and LD-IR. The radiation dose of the LD group was reduced by about 50.2% compared with that of the ND group. The DL algorithm, with a noise shape similar to the medium-grade IR commonly used in clinical practice, showed higher noise reduction ability than IR. Compared with FBP, the DL algorithm had a smoother noise shape and much better noise reduction ability. The application of the DL algorithm in low-dose thin-slice liver CT of healthy volunteers can help achieve the standard image quality of liver CT.

  • Conference Article
  • Cited by 32
  • 10.1109/cloudtech.2016.7847682
Cloud security and privacy model for providing secure cloud services
  • May 1, 2016
  • Khalid El Makkaoui + 3 more

Cloud computing is increasingly becoming a magical solution and a widely adopted technology for delivering services over the Internet thanks to its diverse benefits, including services on demand, reduced costs, sharing and configuring of computing resources, and high service scalability and flexibility. However, with the emergence of this technology, security and privacy have become a major barrier to cloud services adoption. Indeed, much research has been done to identify cloud security and privacy issues. It is in this context that this paper provides a new cloud security and privacy model (CSPM), organized into layers, which can be taken into account by cloud providers during all the stages of building and monitoring cloud services. This model will help overcome this barrier to cloud services adoption and thus build confidence in cloud services and provide secure services. Finally, we present some security threats and attacks and propose, according to CSPM, some countermeasures.

  • Research Article
  • Cited by 80
  • 10.1007/s13202-022-01531-z
Predicting shear wave velocity from conventional well logs with deep and hybrid machine learning algorithms
  • Jul 11, 2022
  • Journal of Petroleum Exploration and Production Technology
  • Meysam Rajabi + 9 more

Shear wave velocity (VS) data from sedimentary rock sequences is a prerequisite for implementing most mathematical models of petroleum engineering geomechanics. Extracting such data by analyzing finite reservoir rock cores is very costly and limited. The high cost of the sonic dipole advanced wellbore logging service and its implementation in only a few wells of a field has placed many limitations on geomechanical modeling. On the other hand, VS tends to be nonlinearly related to many of its influencing variables, making empirical correlations unreliable for its prediction. Hybrid machine learning (HML) algorithms are well suited to improving predictions of such variables. Recent advances in deep learning (DL) algorithms suggest that they too should be useful for predicting VS for large gas and oil field datasets, but this has yet to be verified. In this study, 6622 data records from two wells in the giant Iranian Marun oil field (MN#163 and MN#225) are used to train HML and DL algorithms. 2072 independent data records from another well (MN#179) are used to verify the VS prediction performance based on eight well-log-derived influencing variables. Input variables are standard full-set recorded parameters in conventional oil and gas well logging data available in most older wells. DL predicts VS for the supervised validation subset with a root mean squared error (RMSE) of 0.055 km/s and coefficient of determination (R2) of 0.9729. It achieves similar prediction accuracy when applied to an unseen dataset. By comparing the VS prediction performance results, it is apparent that the DL convolutional neural network model slightly outperforms the HML algorithms tested. Both DL and HML models substantially outperform five commonly used empirical relationships for calculating VS from VP when applied to the Marun Field dataset. Concerns regarding the model's integrity and reproducibility were also addressed by evaluating it on data from another well in the field. The findings of this study can lead to the development of knowledge of production patterns and sustainability of oil reservoirs and the prevention of enormous damage related to geomechanics through a better understanding of wellbore instability and casing collapse problems.

  • Research Article
  • Cited by 8
  • 10.1007/s10586-015-0468-2
A virtualization mechanism for real-time multimedia-assisted mobile food recognition application in cloud computing
  • Sep 1, 2015
  • Cluster Computing
  • Parisa Pouladzadeh + 4 more

The integration of multimedia-assisted healthcare systems with cloud-computing services and mobile technologies has led to increased accessibility for healthcare providers and patients. Utilizing cloud computing infrastructures and virtualization technologies allows for the transformation of traditional healthcare systems that demand manual care and monitoring into more salient, automatic and cost-effective systems. The goal of this paper is to develop a multimedia-assisted mobile healthcare application using cloud-computing virtualization technologies. We consider calorie measurement as an example healthcare application that can benefit from cloud-computing virtualization technology. The key functionalities of our application entail image segmentation, image processing and deep learning algorithms for food classification and recognition. Client-side devices (e.g. smartphones, tablets, etc.) have limitations in handling the time-sensitive and computationally intensive algorithms pertaining to our application. Image processing and deep learning algorithms, used in food recognition and calorie measurement, consume devices' batteries quickly, which is inconvenient for the user. It is also very challenging for client-side devices to scale to the large numbers of data and images needed for food recognition. The entire process is time-consuming, inefficient and discomforting from the users' perspective and may deter them from using the application. In this paper, we address these challenges by proposing a virtualization mechanism in cloud computing that utilizes the Android architecture. Android allows for partitioning an application into activities run by the front-end user and services run by the back-end tasks. In the proposed virtualization mechanism, we use both the hosted and the hypervisor models to publish our Android-based food recognition and calorie measurement application in the cloud. By so doing, the users of our application can control their virtual smartphone operations through a dedicated client application installed on their smartphones, while the processing of the application continues to run on the virtual Android image even if the user is disconnected due to any unexpected event. We have performed several experiments to validate our mechanism. Specifically, we have run our deep learning and image processing algorithms for food recognition on different configuration platforms on both the cloud and a local server connected to the mobile. The results show that the accuracy of the system with the virtualization mechanism is more than 94.33% compared to 87.16% when we run the application locally. Also, with our virtualization mechanism the results are processed 49% faster than in the case of running the application locally.
