Guest Editorial Introduction for the Special Section on Deep Learning Algorithms and Systems for Enhancing Security in Cloud Services

Abstract

Editors: Gunasekaran Manogaran (Howard University, Washington D.C., USA), Hassan Qudrat-Ullah (York University, Toronto, Canada), Qin Xin (University of the Faroe Islands, Faroe Islands), Latifur Khan (The University of Texas at Dallas, Texas, USA)
ACM Transactions on Internet Technology, Volume 22, Issue 2, May 2022, Article No. 39e, pp. 1–5
https://doi.org/10.1145/3516806, published online 14 May 2022

Similar Papers
  • Research Article
  • Cited by 19
  • 10.1109/tsc.2020.2996382
Joint Pricing and Security Investment in Cloud Security Service Market With User Interdependency
  • May 22, 2020
  • IEEE Transactions on Services Computing
  • Shaohan Feng + 5 more

After several decades of development of cyber security techniques, one clear conclusion can be drawn: no cyber security solution can completely remove the risks faced by users. In this regard, cyber-insurance has been introduced as a means of alleviating the damage from cyber threats by transferring cyber risks to an insurer. In this article, we study a cloud security service market composed of cloud users and cloud security service vendors (CSSVs). The CSSVs act as insurers selling a cloud security plan, which consists of a cloud security service and cloud-insurance. Users in the cloud platform can purchase the cloud security plan from the CSSVs to secure their cloud service; if the cloud service is attacked and a loss occurs, the users receive a claim payout from the CSSVs. To lower the probability of a successful attack, a CSSV has an incentive to invest in improving its cloud security service. Specifically, we model and study the cloud security service market in the framework of a two-stage Stackelberg game. On the upper stage, the CSSVs, as leaders, decide on their own strategies, i.e., the price of the cloud security plan and the security investment to improve their offered cloud security service. On the lower stage, the users, as followers, decide whether to purchase the cloud security plan according to its price and the perceived cyber breach probability of the cloud security service. We analytically verify that the Stackelberg equilibrium exists and is unique. Extensive simulations have been conducted to evaluate the performance of the Stackelberg game, and the evaluation yields some insightful results; for example, when the users have strong interdependency, the profits of the CSSVs become lower.
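The two-stage structure described in this abstract can be illustrated with a toy backward-induction computation. Everything below, including the breach-probability curve, the user valuation, the claim size, and the grid search, is an invented stand-in for illustration, not the paper's actual model:

```python
import numpy as np

# Toy instance of the two-stage Stackelberg market: a single CSSV (leader)
# picks price p and security investment s; homogeneous users (followers)
# buy iff expected utility is positive. All functional forms and numbers
# below are invented for illustration.

def breach_prob(s):
    """Perceived breach probability falls with security investment s."""
    return 0.5 * np.exp(-s)

def demand(p, s, n_users=100, valuation=10.0):
    """Lower stage: users purchase the plan when expected utility > 0."""
    utility = valuation * (1 - breach_prob(s)) - p
    return n_users if utility > 0 else 0

def cssv_profit(p, s, claim=5.0):
    """Upper stage: revenue minus investment cost minus expected claims."""
    d = demand(p, s)
    return p * d - s**2 - d * breach_prob(s) * claim

# Backward induction via grid search over the leader's strategy space.
prices = np.linspace(0.1, 10.0, 100)
investments = np.linspace(0.0, 5.0, 100)
profit, p_star, s_star = max(
    (cssv_profit(p, s), p, s) for p in prices for s in investments
)
print(f"equilibrium: price={p_star:.2f}, investment={s_star:.2f}, profit={profit:.1f}")
```

The grid search plays the role of solving the leader's problem after substituting the followers' best response; the paper instead derives the equilibrium analytically.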

  • Research Article
  • Cited by 23
  • 10.1016/j.cosrev.2017.08.001
A survey on design and implementation of protected searchable data in the cloud
  • Aug 24, 2017
  • Computer Science Review
  • Rafael Dowsley + 3 more


  • Conference Instance
  • Cited by 8
  • 10.1145/2378975
Proceedings of the 2012 workshop on Cloud services, federation, and the 8th open cirrus summit
  • Sep 21, 2012

Welcome to the Workshop on Cloud Services, Federation, and the 8th Open Cirrus Summit, held in conjunction with the International Conference on Autonomic Computing 2012 in San Jose on 21 September 2012. This workshop brings together researchers and practitioners to discuss the newest ideas and challenges in cloud services and federated cloud computing. The workshop consists of presentations of peer-reviewed papers and active discussion of related topics, and a panel discussion is also planned. The services offered by clouds are becoming critical for a wide variety of applications used by industry, education, and government. There are now many examples of successful cloud services offered by public, private, and community clouds, and many efforts are creating cloud toolkits and frameworks to simplify the development and delivery of cloud services. The main purpose of this workshop is to bring together those responsible for designing, managing, and operating cloud services so that they can share experiences with each other. The workshop also welcomes users with requirements for new cloud services; we are particularly interested in cloud services that can be used for federating clouds. Topics of interest include: experiences, best practices, and lessons learned from operating cloud services; testbeds for designing new cloud services; cloud services for federating clouds; management and provisioning of cloud services; health and status monitoring of cloud services; security of cloud services; requirements for new cloud services; reliability and fault tolerance of cloud services; cloud services that span public and private clouds, including intercloud services, federation services, identity services, and cloud-bursting services; cloud services for emerging applications; applications utilizing such services; and cloud software and tools for IaaS, PaaS, Hadoop, and others.
This workshop builds upon the success of the prior Open Cirrus events and the prior Open Cloud Consortium and FutureGrid events, and has been expanded to include other organizations. The goal is to help build a community for those responsible for operating clouds and cloud testbeds, as well as those interested in designing new cloud services.

  • Conference Article
  • 10.1109/trustcom.2016.0268
SECUPerf: End-to-End Security and Performance Assessment of Cloud Services
  • Aug 1, 2016
  • Ajay Pantangi + 2 more

As the International Data Corporation (IDC) estimated, public cloud services spending reached nearly $70 billion in 2015. While enterprise and public adoption of cloud applications and services is accelerating over time, security and performance remain dominant concerns. While many studies address the security and performance of cloud data centers and cloud networks, it is unclear how to evaluate the end-to-end security and performance of cloud services. In this paper, we design and develop SECUPerf, a security and performance assessment approach, to address this question. Furthermore, we leverage the Global Environment for Network Innovations (GENI) to collect the security and performance metrics of cloud services and then evaluate SECUPerf on GENI. This paper focuses on the design and experimental evaluation of SECUPerf; our experiments demonstrate its effectiveness.

  • Conference Article
  • Cited by 204
  • 10.1109/icse.2019.00107
CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries
  • May 1, 2019
  • Hung Viet Pham + 3 more

Deep learning (DL) systems are widely used in domains including aircraft collision avoidance systems, Alzheimer's disease diagnosis, and autonomous driving cars. Despite the requirement for high reliability, DL systems are difficult to test. Existing DL testing work focuses on testing the DL models, not the implementations (e.g., DL software libraries) of the models. One key challenge of testing DL libraries is the difficulty of knowing the expected output of DL libraries given an input instance. Fortunately, there are multiple implementations of the same DL algorithms in different DL libraries. Thus, we propose CRADLE, a new approach that focuses on finding and localizing bugs in DL software libraries. CRADLE (1) performs cross-implementation inconsistency checking to detect bugs in DL libraries, and (2) leverages anomaly propagation tracking and analysis to localize faulty functions in DL libraries that cause the bugs. We evaluate CRADLE on three libraries (TensorFlow, CNTK, and Theano), 11 datasets (including ImageNet, MNIST, and KGS Go game), and 30 pre-trained models. CRADLE detects 12 bugs and 104 unique inconsistencies, and highlights functions relevant to the causes of inconsistencies for all 104 unique inconsistencies.
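Cross-implementation inconsistency checking of the kind CRADLE performs can be sketched at a tiny scale. The two "backends" below are hand-written softmax variants standing in for different DL libraries; the numerically unstable one plays the role of a buggy library implementation:

```python
import numpy as np

np.seterr(all="ignore")  # silence the deliberate overflow in the naive backend

def softmax_stable(x):
    """Reference backend: numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_naive(x):
    """'Buggy' backend: overflows for large inputs, yielding NaNs."""
    e = np.exp(x)
    return e / e.sum()

def check_inconsistency(x, backends, tol=1e-6):
    """Run every backend on the same input; flag pairwise deviations above tol."""
    outputs = [fn(x) for fn in backends.values()]
    worst = 0.0
    for i in range(len(outputs)):
        for j in range(i + 1, len(outputs)):
            diff = np.abs(outputs[i] - outputs[j])
            dev = float("inf") if np.isnan(diff).any() else float(diff.max())
            worst = max(worst, dev)
    return worst > tol, worst

backends = {"stable": softmax_stable, "naive": softmax_naive}
ok_input = np.array([1.0, 2.0, 3.0])
bad_input = np.array([1000.0, 2000.0, 3000.0])  # triggers the overflow bug
print(check_inconsistency(ok_input, backends))   # consistent
print(check_inconsistency(bad_input, backends))  # inconsistent
```

CRADLE additionally localizes the faulty function by tracking where the inconsistency first appears as it propagates through the layers; this sketch only covers the detection side.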

  • Conference Article
  • Cited by 5
  • 10.1109/iccsee.2012.184
Cloud Security Service Providing Schemes Based on Mobile Internet Framework
  • Mar 1, 2012
  • Lian-Chi Zhou + 1 more

This paper deals with dynamic cloud security services in the mobile Internet framework, which differ in important ways from traditional cloud security services owing to the complexity, mobility, openness, and instability of the user groups. In view of these features, different enterprises and users may have different demands for cloud security services. Therefore, in order to provide different users with different levels of cloud security service, this paper proposes: a cloud service access control model that supports permission changes, a cloud security service customizing architecture for differential security demands, and a security self-adaptive mechanism for cloud services. These three sub-schemes help realize the controllability, customizability, and adaptability of the cloud security service.
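The first sub-scheme, an access control model supporting permission changes, can be sketched roughly as follows; the level names and per-service requirements are invented for illustration, not taken from the paper:

```python
from enum import IntEnum

# Illustrative sketch of per-user security levels that can change at
# runtime. Level names and service requirements are assumptions made
# for this example only.

class SecurityLevel(IntEnum):
    BASIC = 1
    STANDARD = 2
    PREMIUM = 3

# Each cloud security service requires a minimum level to access.
SERVICE_REQUIREMENTS = {
    "file_scan": SecurityLevel.BASIC,
    "intrusion_detection": SecurityLevel.STANDARD,
    "full_audit": SecurityLevel.PREMIUM,
}

class AccessController:
    """Access-control model supporting runtime permission changes."""

    def __init__(self):
        self.user_levels = {}

    def set_level(self, user, level):
        self.user_levels[user] = level  # change takes effect immediately

    def can_use(self, user, service):
        level = self.user_levels.get(user, SecurityLevel.BASIC)
        return level >= SERVICE_REQUIREMENTS[service]

ac = AccessController()
ac.set_level("alice", SecurityLevel.STANDARD)
print(ac.can_use("alice", "intrusion_detection"))  # True
print(ac.can_use("alice", "full_audit"))           # False
ac.set_level("alice", SecurityLevel.PREMIUM)       # dynamic upgrade
print(ac.can_use("alice", "full_audit"))           # True
```

The point of the sketch is the dynamic upgrade path: permissions are evaluated against the user's current level at call time, so a level change needs no re-provisioning.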

  • Research Article
  • Cited by 509
  • 10.1016/j.preteyeres.2019.04.003
Deep learning in ophthalmology: The technical and clinical considerations.
  • Apr 29, 2019
  • Progress in Retinal and Eye Research
  • Daniel S.W Ting + 11 more


  • Research Article
  • 10.30837/rt.2023.1.212.04
Models of threats to cloud services
  • Mar 28, 2023
  • Radiotekhnika
  • M.V Yesina + 2 more

Cloud services have become popular due to their advantages over traditional computing. The cloud provides remote access to software, hardware, and other services. This has allowed companies to be more productive and enabled remote work. Cloud services have fewer hardware and infrastructure requirements, which reduces the cost of maintaining and supporting information technology. The future success of organizations will depend, not least, on the extent to which they implement cloud computing in their operations. According to forecasts, spending on cloud IT technologies will continue to grow and in 2025 will exceed spending on traditional IT technologies. Security of cloud services is becoming a critical issue as more and more companies complete their digital transformation. Despite the many benefits, cloud services also face their own security threats and challenges. Since cloud services store and process a significant amount of sensitive information, a cloud breach can lead to data leaks that can hinder business development and cause significant damage to a company's reputation. There are risks associated with the unavailability of cloud services in case of technical problems and dependence on external providers. Therefore, companies should carefully assess potential threats and take appropriate measures to protect their data and business in general when using cloud services. There are many methods to help determine how prepared your organization is for the growing number of threats. Threat modeling is one of the methods for predicting and preparing for possible threats. Using modeling frameworks allows you to allocate resources and plan possible actions during an attack. There are many modeling frameworks available, but it is important to remember that these frameworks have their advantages and disadvantages, so the choice depends on the context and needs of a particular system. 
Analyzing, evaluating, and comparing existing methods for modeling and protecting against threats in cloud services is the main objective of this article.

  • Research Article
  • Cited by 1
  • 10.1016/j.jocmr.2025.101932
Development of a deep learning algorithm for detecting significant coronary artery stenosis in whole-heart coronary magnetic resonance angiography
  • Jan 1, 2025
  • Journal of Cardiovascular Magnetic Resonance
  • Masafumi Takafuji + 11 more

Background: Whole-heart coronary magnetic resonance angiography (CMRA) enables noninvasive and accurate detection of coronary artery stenosis. Nevertheless, the visual interpretation of CMRA is constrained by the observer's experience, necessitating substantial training. The purposes of this study were to develop a deep learning (DL) algorithm using a deep convolutional neural network to accurately detect significant coronary artery stenosis in CMRA and to investigate the effectiveness of this DL algorithm as a tool for assisting in accurate detection of coronary artery stenosis. Methods: Nine hundred and fifty-one coronary segments from 75 patients who underwent both CMRA and invasive coronary angiography (ICA) were studied. Significant stenosis was defined as a reduction in luminal diameter of >50% on quantitative ICA. A DL algorithm was proposed to classify CMRA segments into those with and without significant stenosis. A four-fold cross-validation method was used to train and test the DL algorithm. An observer study was then conducted using 40 segments with stenosis and 40 segments without stenosis. Three radiology experts and three radiology trainees independently rated the likelihood of the presence of stenosis in each coronary segment on a continuous scale from 0 to 1, first without the support of the DL algorithm, then using the DL algorithm. Results: Significant stenosis was observed in 84 (8.8%) of the 951 coronary segments. Using the DL algorithm trained by the four-fold cross-validation method, the area under the receiver operating characteristic curve (AUC) for the detection of segments with significant coronary artery stenosis was 0.890, with 83.3% sensitivity, 83.6% specificity, and 83.6% accuracy. In the observer study, the average AUC of trainees was significantly improved using the DL algorithm (0.898) compared to that without the algorithm (0.821, p < 0.001). The average AUC of experts tended to be higher with the DL algorithm (0.897), but not significantly different from that without the algorithm (0.879, p = 0.082). Conclusion: We developed a DL algorithm offering high diagnostic accuracy for detecting significant coronary artery stenosis on CMRA. Our proposed DL algorithm appears to be an effective tool for assisting inexperienced observers to accurately detect coronary artery stenosis in whole-heart CMRA.
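The evaluation protocol described here, four-fold cross-validation scored by AUC, can be sketched on synthetic data. The one-dimensional "segment scores" and the trivial standardization "model" below stand in for the CNN and the CMRA images:

```python
import numpy as np

# Synthetic stand-in for the study's data: ~15% "stenotic" segments whose
# 1-D score is shifted upward. The "model" is just a z-score transform
# fit on the training folds; only the CV/AUC machinery mirrors the paper.

rng = np.random.default_rng(0)
n = 400
labels = (rng.random(n) < 0.15).astype(int)
features = rng.normal(loc=labels * 2.0, scale=1.0)

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

folds = np.array_split(rng.permutation(n), 4)   # four-fold cross-validation
aucs = []
for k in range(4):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(4) if j != k])
    # "Train" a trivial model: standardize using training-fold statistics.
    mu, sd = features[train_idx].mean(), features[train_idx].std()
    aucs.append(auc(labels[test_idx], (features[test_idx] - mu) / sd))
print(f"per-fold AUC: {np.round(aucs, 3)}, mean = {np.mean(aucs):.3f}")
```

Each fold is held out exactly once, so every segment contributes one out-of-sample score, which is the property the study relies on when reporting a single cross-validated AUC.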

  • Research Article
  • Cited by 31
  • 10.1148/radiol.2021202803
Use of a Commercially Available Deep Learning Algorithm to Measure the Solid Portions of Lung Cancer Manifesting as Subsolid Lesions at CT: Comparisons with Radiologists and Invasive Component Size at Pathologic Examination.
  • Feb 2, 2021
  • Radiology
  • Yura Ahn + 6 more

Background: The solid portion size of lung cancer lesions manifesting as subsolid lesions is key in their management, but the automatic measurement of such lesions by means of a deep learning (DL) algorithm needs evaluation. Purpose: To evaluate the performance of a commercially available DL algorithm for automatic measurement of the solid portion of surgically proven lung adenocarcinomas manifesting as subsolid lesions. Materials and Methods: Surgically proven lung adenocarcinomas manifesting as subsolid lesions on CT images between January 2018 and December 2018 were retrospectively included. Five radiologists independently measured the maximal axial diameter of the solid portion of lesions. The DL algorithm automatically segmented and measured the maximal axial diameter of the solid portion. Reader measurements, software measurements, and invasive component size at pathologic examination were compared by using the intraclass correlation coefficient (ICC) and Bland-Altman plots. Results: A total of 448 patients (mean age, 63 years ± 10 [standard deviation]; 264 women) with 448 lesions were evaluated (invasive component size, 3-65 mm). The measurement agreements between each radiologist and the DL algorithm were very good (ICC range, 0.82-0.89). When a radiologist was replaced with the DL algorithm, the ICCs ranged from 0.87 to 0.90, with an ICC of 0.90 among five radiologists. The mean difference between the DL algorithm and each radiologist ranged from -3.7 to 1.5 mm. The widest 95% limit of agreement between the DL algorithm and each radiologist (-15.7 to 8.3 mm) was wider than pairwise comparisons of radiologists (-7.7 to 13.0 mm). The agreement between the DL algorithm and invasive component size at pathologic evaluation was good, with an ICC of 0.67. Measurements by the DL algorithm (mean difference, -6.0 mm) and radiologists (mean difference, -7.5 to -2.3 mm) both underestimated invasive component size. Conclusion: Automatic measurements of solid portions of lung cancer manifesting as subsolid lesions by the deep learning algorithm were comparable with manual measurements and showed good agreement with invasive component size at pathologic evaluation. © RSNA, 2021. Online supplemental material is available for this article.
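The agreement statistics used in this study can be sketched on synthetic measurements. The sketch below computes a Bland-Altman mean difference and 95% limits of agreement; it reports Pearson correlation as a simple stand-in for the ICC the paper uses (ICC additionally penalizes systematic offsets):

```python
import numpy as np

# Synthetic lesion sizes in the study's 3-65 mm range; the reader and the
# "algorithm" are noisy observers, with the algorithm given a slight
# systematic underestimate. All numbers are invented for illustration.

rng = np.random.default_rng(1)
true_size = rng.uniform(3, 65, 200)
reader = true_size + rng.normal(0, 2.0, 200)
algorithm = true_size + rng.normal(-1.0, 2.5, 200)

diff = algorithm - reader
bias = diff.mean()                                  # Bland-Altman mean difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)          # 95% limits of agreement
print(f"mean difference: {bias:.2f} mm")
print(f"95% limits of agreement: [{loa[0]:.2f}, {loa[1]:.2f}] mm")

# Pearson correlation as a crude agreement index (the paper reports ICC).
r = np.corrcoef(reader, algorithm)[0, 1]
print(f"correlation: {r:.3f}")
```

Note how a high correlation can coexist with a nonzero bias; this is exactly why the study reports both ICC-style agreement and Bland-Altman mean differences.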

  • Research Article
  • 10.31673/2412-9070.2022.040916
Security mechanisms in the cloud environment based on international standards
  • Jan 1, 2022
  • Connectivity
  • L V Dakova

A standardized functional approach to the conformity assessment procedure is improved, based on the specifics of how cloud technologies function. A review was carried out of the existing frameworks used for the evaluation and certification of Cloud Service Providers (CSPs) in terms of compliance with the requirements of generally recognized security standards. The proposed assurance levels provide for the development of special requirements for ensuring the security of the information systems of cloud service providers, in accordance with the criticality classification of the systems and data of potential consumers of cloud services. Guided by regulatory acts, international standards, and the national schemes already considered for evaluating the cyber security of cloud products and services, a generalized list of security requirements for cloud service providers is formulated that covers all the necessary conditions and corresponds to the proposed assurance levels. An assessment of compliance with security standards was carried out, which is the starting point for defining an information security policy and combating the threats inherent in cloud services. A division into three levels of security assurance, which a CSP should meet during conformity assessment, is proposed, depending on the business needs of users and the criticality of the data processed and stored by the cloud information system. A generalized scheme of security requirements for CSPs has been developed, built on well-known frameworks; it takes into account a multi-level approach to security assurance and distributed responsibility for compliance depending on the operating model, and it identifies the components of the cloud architecture that are sensitive to particular conditions. This article combines leading United States and European Union standards with best security practices for using a cloud environment, which is considered among the most dangerous from an information security point of view yet convenient to use.
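The article's idea of mapping data criticality to one of three assurance levels, each with its own control requirements, can be sketched as follows; the level names, criticality thresholds, and control lists are invented for illustration, not taken from the article:

```python
# Illustrative three-level assurance scheme: map a consumer's data
# criticality to an assurance level and the controls a CSP must
# demonstrate at that level. All names and thresholds are assumptions.

ASSURANCE_LEVELS = {
    1: {"name": "basic", "controls": ["access control", "backups"]},
    2: {"name": "substantial", "controls": ["access control", "backups",
                                            "encryption at rest", "audit logging"]},
    3: {"name": "high", "controls": ["access control", "backups",
                                     "encryption at rest", "audit logging",
                                     "independent penetration testing"]},
}

def required_level(data_criticality):
    """Pick an assurance level from a criticality score in [0, 1]."""
    if data_criticality < 0.3:
        return 1
    if data_criticality < 0.7:
        return 2
    return 3

def assess_csp(implemented_controls, data_criticality):
    """Conformity check: does the CSP cover every control at the required level?"""
    level = required_level(data_criticality)
    missing = [c for c in ASSURANCE_LEVELS[level]["controls"]
               if c not in implemented_controls]
    return {"level": ASSURANCE_LEVELS[level]["name"],
            "missing": missing,
            "compliant": not missing}

result = assess_csp({"access control", "backups", "encryption at rest"}, 0.8)
print(result)  # level "high"; audit logging and penetration testing missing
```

Higher levels strictly extend lower ones, mirroring the article's multi-level approach where requirements accumulate with the criticality of the processed data.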

  • Conference Article
  • Cited by 71
  • 10.1145/3377811.3380391
Importance-driven deep learning system testing
  • Jun 27, 2020
  • Simos Gerasimou + 3 more

Deep Learning (DL) systems are key enablers for engineering intelligent applications due to their ability to solve complex tasks such as image recognition and machine translation. Nevertheless, using DL systems in safety- and security-critical applications requires providing testing evidence for their dependable operation. Recent research in this direction focuses on adapting testing criteria from traditional software engineering as a means of increasing confidence in their correct behaviour. However, these criteria are inadequate in capturing the intrinsic properties exhibited by these systems. We bridge this gap by introducing DeepImportance, a systematic testing methodology accompanied by an Importance-Driven Coverage (IDC) test adequacy criterion for DL systems. Applying IDC establishes a layer-wise functional understanding of the importance of DL system components and uses this information to assess the semantic diversity of a test set. Our empirical evaluation on several DL systems, across multiple DL datasets and with state-of-the-art adversarial generation techniques, demonstrates the usefulness and effectiveness of DeepImportance and its ability to support the engineering of more robust DL systems.
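An importance-driven coverage criterion in the spirit of IDC can be sketched on a toy dense layer: pick the most important neurons, quantize their activations into clusters, and measure how many cluster combinations a test set exercises. The importance proxy (mean absolute activation) and the two-way split below are simplifying assumptions, not DeepImportance's actual method:

```python
import numpy as np

# Toy ReLU layer: 4 inputs -> 8 neurons. "Importance" here is mean
# absolute activation over a training set -- a crude stand-in for the
# relevance-propagation scores DeepImportance actually uses.

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 8))

def activations(X):
    return np.maximum(X @ W, 0.0)

train_X = rng.normal(size=(200, 4))
acts = activations(train_X)
importance = np.abs(acts).mean(axis=0)        # proxy importance per neuron
top = np.argsort(importance)[-3:]             # 3 most important neurons
thresholds = np.median(acts[:, top], axis=0)  # 2 clusters per neuron: low/high

def covered_combinations(test_X):
    """Set of low/high activation patterns the test set exercises."""
    a = activations(test_X)[:, top]
    return {tuple((row > thresholds).astype(int)) for row in a}

test_X = rng.normal(size=(50, 4))
covered = covered_combinations(test_X)
total = 2 ** len(top)                         # all low/high combinations
print(f"importance-driven coverage: {len(covered)}/{total} = {len(covered)/total:.2f}")
```

A test set that drives the important neurons into more distinct joint activation patterns scores higher, which is the semantic-diversity intuition behind IDC.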

  • Discussion
  • Cited by 14
  • 10.1148/radiol.2020200855
Three Reasons Why Artificial Intelligence Might Be the Radiologist's Best Friend.
  • Apr 21, 2020
  • Radiology
  • Rick R Van Rijn + 1 more


  • Conference Article
  • Cited by 4
  • 10.1145/3019612.3019633
Assessing end-to-end performance and security in cloud computing
  • Apr 3, 2017
  • Kaiqi Xiong + 1 more

While most studies are concerned with the network performance and security of data centers in the cloud, a shared computing infrastructure, there is little research on understanding the end-to-end performance and security of the cloud services offered by cloud providers. That is, while cloud providers promise to deliver cloud services that meet predefined Quality of Service (QoS) levels, there is currently a lack of efficient tools for verifying the performance and security of the cloud services a user actually receives. Such research, however, plays an important role in the successful delivery of cloud services. In this paper, we present a systematic way to evaluate the end-to-end performance and security of cloud services in a shared computing infrastructure. We design and develop an end-to-end SECUrity and Performance assessment framework (SECUPerf), in which we experimentally and analytically investigate the performance and security of the routers along the path of cloud services between cloud users and providers. Our experimental results demonstrate the applicability and usefulness of SECUPerf in the cloud; SECUPerf is useful to all users of the shared computing infrastructure.
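End-to-end assessment of the kind SECUPerf performs can be caricatured as aggregating per-hop performance and security indicators along the user-to-provider path. The hop data and the boolean "encrypted" flag below are invented for illustration; a real deployment would collect measurements from testbed probes rather than hard-code them:

```python
from dataclasses import dataclass

# Illustrative per-hop record: one latency number and one boolean
# security indicator per router. Both are simplifying assumptions.

@dataclass
class Hop:
    name: str
    latency_ms: float
    encrypted: bool

def assess_path(path, latency_budget_ms=100.0):
    """Return (meets_qos, end_to_end_latency_ms, insecure_hop_names)."""
    total = sum(h.latency_ms for h in path)
    insecure = [h.name for h in path if not h.encrypted]
    meets_qos = total <= latency_budget_ms and not insecure
    return meets_qos, total, insecure

path = [
    Hop("user-edge", 5.0, True),
    Hop("isp-core", 20.0, True),
    Hop("cloud-ingress", 12.0, False),   # unencrypted hop -> security finding
    Hop("datacenter", 3.0, True),
]
print(assess_path(path))  # (False, 40.0, ['cloud-ingress'])
```

The path meets the latency budget but still fails overall because one hop fails the security check, illustrating why performance and security must be assessed jointly end to end.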

  • Research Article
  • Cited by 72
  • 10.1007/s13202-021-01087-4
Prediction performance advantages of deep machine learning algorithms for two-phase flow rates through wellhead chokes
  • Feb 23, 2021
  • Journal of Petroleum Exploration and Production
  • Hossein Shojaei Barjouei + 6 more

Two-phase flow rate estimation of liquid and gas flow through wellhead chokes is essential for determining and monitoring production performance from oil and gas reservoirs at specific well locations. Liquid flow rate (QL) tends to be nonlinearly related to its influencing variables, making empirical correlations unreliable for predictions applied to different reservoir conditions and favoring machine learning (ML) algorithms for that purpose. Recent advances in deep learning (DL) algorithms make them useful for predicting wellhead choke flow rates for large field datasets and suitable for wider application once trained. DL has not previously been applied to predict QL for a large oil field. In this study, 7245 multi-well data records from the Sorush oil field are used to compare the QL prediction performance of traditional empirical, ML, and DL algorithms based on four influencing variables: choke size (D64), wellhead pressure (Pwh), oil specific gravity (γo), and gas–liquid ratio (GLR). The prevailing flow regime for the wells evaluated is critical flow. The DL algorithm substantially outperforms the other algorithms considered in terms of QL prediction accuracy. The DL algorithm predicts QL for the testing subset with a root-mean-squared error (RMSE) of 196 STB/day and a coefficient of determination (R2) of 0.9969 for the Sorush dataset. The QL prediction accuracy of the models evaluated for this dataset can be arranged in descending order: DL > DT > RF > ANN > SVR > Pilehvari > Baxendell > Ros > Gilbert > Achong. Analysis reveals that input variable GLR has the greatest, whereas input variable D64 has the least, relative influence on the dependent variable QL.
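The two headline metrics, RMSE and R², can be reproduced on a toy choke-flow regression. The synthetic data, the single engineered feature, and the linear least-squares "model" below are all invented stand-ins for the paper's DL model and the Sorush dataset:

```python
import numpy as np

# Synthetic choke-flow data loosely shaped like a Gilbert-type relation
# QL ~ D64 * Pwh / sqrt(GLR), plus noise. Numbers are illustrative only.

rng = np.random.default_rng(3)
n = 500
D64 = rng.uniform(20, 60, n)       # choke size
Pwh = rng.uniform(500, 3000, n)    # wellhead pressure
GLR = rng.uniform(100, 2000, n)    # gas-liquid ratio
QL = 0.1 * D64 * Pwh / np.sqrt(GLR) + rng.normal(0, 50, n)

# One engineered feature plus intercept; fit by ordinary least squares.
X = np.column_stack([D64 * Pwh / np.sqrt(GLR), np.ones(n)])
train, test = np.arange(0, 400), np.arange(400, n)
coef, *_ = np.linalg.lstsq(X[train], QL[train], rcond=None)
pred = X[test] @ coef

rmse = np.sqrt(np.mean((QL[test] - pred) ** 2))
ss_res = np.sum((QL[test] - pred) ** 2)
ss_tot = np.sum((QL[test] - QL[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"RMSE = {rmse:.1f} STB/day, R^2 = {r2:.4f}")
```

RMSE is in the units of the target (STB/day here, as in the paper), while R² is unitless, which is why studies typically report both when ranking models.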
