Big data, AI, and mHealth–the digital evolution of cardiology
- Conference Article
37
- 10.1145/3243176.3243190
- Nov 1, 2018
The complexity and diversity of big data and AI workloads make understanding them difficult and challenging. This paper proposes a new approach to modelling and characterizing big data and AI workloads. We consider each big data and AI workload as a pipeline of one or more classes of units of computation performed on different initial or intermediate data inputs. Each class of unit of computation captures the common requirements while being reasonably divorced from individual implementations, and hence we call it a data motif. For the first time, among a wide variety of big data and AI workloads, we identify eight data motifs that take up most of the run time of those workloads: Matrix, Sampling, Logic, Transform, Set, Graph, Sort, and Statistic. We implement the eight data motifs on different software stacks as the micro-benchmarks of an open-source big data and AI benchmark suite --- BigDataBench 4.0 (publicly available from http://prof.ict.ac.cn/BigDataBench), and perform a comprehensive characterization of those data motifs from the perspective of data sizes, types, sources, and patterns as a lens towards fully understanding big data and AI workloads. We believe the eight data motifs are promising abstractions and tools not only for big data and AI benchmarking, but also for domain-specific hardware and software co-design.
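The motif abstraction can be illustrated with a toy profiler. The sketch below is a hypothetical illustration, not part of BigDataBench: it models a workload as a pipeline of motif kernels (here stand-ins for Sort, Statistic, and Sampling) and measures each motif's share of the total run time.

```python
# Toy sketch (not the BigDataBench implementation): a workload modelled
# as a pipeline of data-motif kernels, profiled for run-time share.
# The kernel bodies below are invented stand-ins for the named motifs.
import random
import time

def sort_motif(data):
    return sorted(data)

def statistic_motif(data):
    return sum(data) / len(data)

def sample_motif(data, k=100):
    return random.sample(data, k)

def profile_pipeline(data, motifs):
    """Run each motif kernel on the data and record its wall-clock share."""
    timings = {}
    for name, fn in motifs:
        start = time.perf_counter()
        fn(data)
        timings[name] = time.perf_counter() - start
    total = sum(timings.values())
    return {name: t / total for name, t in timings.items()}

data = [random.random() for _ in range(100_000)]
shares = profile_pipeline(data, [("Sort", sort_motif),
                                 ("Statistic", statistic_motif),
                                 ("Sampling", sample_motif)])
```

On typical inputs the Sort motif dominates this toy pipeline, which is the kind of observation motif-level characterization is meant to surface.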
- Research Article
21
- 10.37394/232015.2023.19.111
- Dec 15, 2023
- WSEAS TRANSACTIONS ON ENVIRONMENT AND DEVELOPMENT
This paper examines how big data analytics and AI improve hospital supply chain sustainability. Hospitals are recognizing the need for eco-friendly operations due to environmental issues and rising healthcare needs. The paper analyzes data from 68 UK hospitals using a conceptual model and partial least squares regression-based structural equation modeling. The research begins by examining the environmental impact of hospital supply networks; energy use, waste, and transportation emissions are major issues. It then explains how big data analytics and AI can address these impacts. This study prioritizes big data analytics for inventory management, demand forecasting, and procurement: hospitals can reduce inventory, waste, and supply shortages using data-driven insights, saving money and the environment. AI also boosts hospital supply chain logistics and transportation efficiency, according to the study. Fuel consumption, carbon emissions, and delivery routes are optimized by AI, and predictive maintenance preserves medical equipment. In conclusion, hospital supply chains benefit greatly from big data analytics and AI. Hospitals can improve the healthcare business, reduce their environmental impact, and preserve resources for future generations. Healthcare leaders, politicians, and researchers seeking data-driven solutions for sustainable hospital supply chains gain valuable insights.
- Front Matter
- 10.1088/1742-6596/1727/1/011001
- Jan 1, 2021
- Journal of Physics: Conference Series
The 2020 Big Data and Artificial Intelligence Conference was successfully held online on September 17-18, 2020, due to COVID-19 restrictions. The conference is devoted to current challenges in Big Data analytics and AI and comprises three tracks: business, technology, and science. There were three major sections of the scientific track of the conference: cluster analysis, applied systems for data analysis, and natural language processing. This volume of IOP Conference Series: Journal of Physics: Conference Series (JPCS) is a compilation of the accepted papers of the Big Data and AI Conference 2020 and represents the contributions presented at the conference. On behalf of the organizing committee, I would like to thank all of the conference sponsors, partners, and volunteers who made the conference possible. Looking forward to meeting you at the Big Data and AI Conference 2021. On behalf of the organizing and program committees of the Big Data and AI Conference 2020, Igor Balk, Big Data and AI Conference 2020 co-chair.
- Book Chapter
3
- 10.1007/978-3-030-78307-5_4
- Jan 1, 2022
Big Data and AI Pipeline patterns provide a good foundation for the analysis and selection of technical architectures for Big Data and AI systems. Experience from many projects in the Big Data PPP program has shown that a number of projects use similar architectural patterns, with variations only in the choice of technology components within the same pattern. The DataBench project has developed a Big Data and AI Pipeline Framework, which is used for the description of pipeline steps in Big Data and AI projects and supports the classification of benchmarks. This includes the four pipeline steps of Data Acquisition/Collection and Storage; Data Preparation and Curation; Data Analytics with AI/Machine Learning; and Action and Interaction, including Data Visualization and User Interaction as well as API Access. The project has also created a toolbox which supports the identification and use of existing benchmarks according to these steps, in addition to all of the different technical areas and data types in the BDV Reference Model. An observatory, a tool accessed via the toolbox for observing the popularity, importance, and visibility of topic terms related to Artificial Intelligence and Big Data technologies, has also been developed and is described in this chapter.
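The four pipeline steps can be sketched as a sequence of named stages that pass data along. The stage names follow the chapter, but the functions below are hypothetical placeholders, not DataBench code.

```python
# Minimal sketch (assumptions, not DataBench's implementation) of the
# four Big Data and AI pipeline steps as a chain of named stages.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PipelineStep:
    name: str
    run: Callable

def acquire(_):
    # Data Acquisition/Collection and Storage: toy raw input
    return [3, 1, 2, 1]

def prepare(data):
    # Data Preparation and Curation: deduplicate and sort
    return sorted(set(data))

def analyze(data):
    # Data Analytics with AI/Machine Learning: toy "model" output
    return {"mean": sum(data) / len(data)}

def act(result):
    # Action and Interaction: render a report string
    return f"mean={result['mean']:.2f}"

pipeline: List[PipelineStep] = [
    PipelineStep("Data Acquisition/Collection and Storage", acquire),
    PipelineStep("Data Preparation and Curation", prepare),
    PipelineStep("Data Analytics with AI/Machine Learning", analyze),
    PipelineStep("Action and Interaction", act),
]

payload = None
for step in pipeline:
    payload = step.run(payload)
# payload is now "mean=2.00"
```

Benchmarks can then be classified by which of these named stages they exercise, which is the classification idea the framework supports.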
- Front Matter
- 10.1088/1742-6596/1405/1/011001
- Nov 1, 2019
- Journal of Physics: Conference Series
The 2019 Big Data and Artificial Intelligence Conference was successfully held on September 18-19, 2019 in Moscow, Russia. The conference is devoted to current challenges in Big Data analytics and AI and comprises three tracks: business, technology, and science. There were three major sections of the scientific track of the conference: cluster analysis, applied systems for data analysis, and natural language processing. This volume of IOP Conference Series: Journal of Physics: Conference Series (JPCS) is a compilation of the accepted papers of the Big Data and AI Conference 2019 and represents the contributions presented at the conference. On behalf of the organizing committee, I would like to thank all of the conference sponsors, partners, and volunteers who made the conference possible. Looking forward to meeting you at the Big Data and AI Conference 2020. On behalf of the organizing and program committees of the Big Data and AI Conference 2019, Igor Balk, Big Data and AI Conference 2019 co-chair.
- Research Article
- 10.26689/ief.v3i6.10967
- Jul 4, 2025
- International Education Forum
This study explores the application of big data and artificial intelligence in the CIPP evaluation system. The study outlines the definition and characteristics of the CIPP evaluation model, as well as big data and AI technologies. Subsequently, the study provides a detailed analysis of the application of big data and AI in each component of the CIPP evaluation system, including context evaluation, input evaluation, process evaluation, and outcome evaluation. The research reveals how big data and AI technologies empower educational evaluation, improving its accuracy and efficiency. Finally, it discusses future development trends and prospects, highlighting the potential and innovative directions of big data and AI technologies in the field of educational evaluation.
- Research Article
4
- 10.1007/978-1-0716-3441-7_16
- Sep 8, 2023
- Methods in molecular biology (Clifton, N.J.)
In the field of computer-aided drug design (CADD), there has been dramatic progress in the development of big data and AI-driven methodologies. Drug design is an expensive and time-consuming process, largely owing to biomedical complexity. CADD can apply effective and efficient strategies to overcome obstacles in drug design in order to properly design and develop a new medicine. To prepare raw data for consistent and repeatable applications of big data and AI methodologies, data pre-processing methods are introduced. Big data and AI technologies can be used in drug development areas including predicting absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties, finding binding sites in target proteins, and conducting structure-based virtual screening. Data pre-processing and the application of big data and AI techniques make possible the accurate and thorough analysis of large amounts of biomedical data, as well as the design of prediction models for drug design. In the biomedical big data era, knowledge of the biological, chemical, or pharmacological structures of biomedical entities relevant to drug design should be analyzed with these big data and AI approaches.
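The property-prediction idea can be illustrated with a toy model. The sketch below is a hypothetical illustration, not a CADD method from the chapter: the molecules, descriptors, labels, and the 1-nearest-neighbour "model" are all invented placeholders.

```python
# Hypothetical sketch of ADMET-style property prediction from simple
# molecular descriptors. Training data and descriptors are invented;
# a real pipeline would use curated, pre-processed biomedical data.
import math

# (molecular_weight, logP) -> known toxicity label (toy training data)
train = {
    (180.2, 1.2): "non-toxic",
    (350.9, 4.8): "toxic",
    (260.3, 2.1): "non-toxic",
}

def predict_toxicity(descriptors):
    """1-nearest-neighbour prediction in descriptor space."""
    nearest = min(train, key=lambda x: math.dist(x, descriptors))
    return train[nearest]

print(predict_toxicity((200.0, 1.5)))  # closest to (180.2, 1.2) -> non-toxic
```

The data pre-processing the chapter emphasizes would sit before a model like this, normalizing descriptors so that distances are meaningful.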
- Research Article
- 10.70088/82kdwb92
- Dec 17, 2025
- GBP Proceedings Series
In the digital era, big data and AI are core drivers transforming corporate financial management, addressing limitations of traditional models in data processing, decision support, and risk management. Big data (defined by the 5Vs and multi-layered architecture) and AI (featuring machine learning, NLP, and visual technology) complement each other: big data fuels AI training, while AI enhances data value mining. These technologies enable intelligent automated data processing, real-time dynamic decision support, and comprehensive risk management (identification, assessment, early warning). Key applications include financial data visualization, budget optimization, cash flow management, AI-driven automated financial processing, and real-time internal control. Critical challenges include data security risks, tech update pressures, talent shortages, and inadequate policies/standards. Solutions involve multi-dimensional data protection, proactive tech upgrading, interdisciplinary talent cultivation, and improved regulations/industry standards. The study concludes that big data and AI reshape financial management toward digitalization and intelligence. Future trends focus on integrating blockchain/quantum computing, deepening business-process integration, and advancing interdisciplinary ethical-legal research, offering insights for enterprises to optimize financial management via technological innovation.
- Research Article
- 10.1038/s41598-022-18724-5
- Sep 3, 2022
- Scientific Reports
Research background: an intelligent polymorphic system of heavy-core clustering with fitting iterative programming is constructed using the edge lens of a dual-core heavy core. A tracking system based on a heavy-core TANH equilibrium array is used to obtain the abnormal data range, and the regular energy fluctuation of the dual-core heavy-core edge lens is used to obtain high-definition images. A complexity-dependent parameter group is built spanning low-end to high-end equipment. Heavy-core clustering from a hierarchical fuzzy clustering system, based on differential incremental balance theory, is applied to AI big-data risk control and quasi-thinking iterative planning for contactless medical equipment. At the same time, mathematical-model risk control is performed by fitting the TANH balance of a local nonlinear random regular micro-vibration diffusion curve. The original CT/MR data undergo hierarchical cross-domain overlapping grid screening with a fitted weakly nonlinear curve structure, which can capture heavy-core cluster analysis of the core layer of big-data anomalies [1:10] and successfully control the mathematical-model risk of the parameter group of the CT/MR machines' internal data. The polar graph of the high-dimensional heavy-core-clustered data is regular and scientific, in contrast to the discrete characteristics of the polar graph of the original data; at the same time, the approach prevents the dimension disaster caused by partial loss of original data when constructing high-dimensional big data, and supports stable, predictable maintenance of CT/MR equipment. This makes it possible to correctly detect and control the dynamic change process of CT/MR equipment over its entire life cycle. It supports predictive maintenance through early pre-inspection and orderly maintenance of the medical system, and standardized, automated unsupervised-learning model software was developed for AI mathematical-model risk control of big data from large medical equipment.
The exposure time and heat capacity (MHU%) of CT tubes, as well as the internal laws of MR (nuclear magnetic resonance), were evaluated scientifically, and the big data were processed twice and three times with heavy-core clustering. After optimizing the algorithm, hundreds of thousands of nonlinear random vibrations are performed in the operation and maintenance database every second, forming at least 30 concurrent operations, which greatly shortens the operation time (Yanwei et al. in J Complex 2017:1-9, 2017, https://doi.org/10.1155/2017/3437854). Finally, after adding micro-vibration quasi-thinking iterative planning for the uncertain structure of AI operation, the scientific and correct results required by high-dimensional information can be obtained and the images analyzed. This kind of AI big-data risk control improves the intelligent management capability of medical institutions. A cross-platform embedded web system for predictable maintenance with AI big data was established (Qi et al. in IEEE Trans Ind Inf 99:1, 2020, https://doi.org/10.1109/tii.2020.3012157).
- Research Article
3
- 10.54216/jcim.140101
- Jan 1, 2024
- Journal of Cybersecurity and Information Management
Big data and AI are now transforming the banking and finance industry at a very fast pace, driving change across banking and financial institutions and making them more effective, customer-oriented, and financially rewarding organizations. Big data and AI have been useful to banking and financial institutions in assessing and managing risk. Through the analysis of large amounts of unstructured data in real time, AI algorithms are capable of identifying risks, making it easier to put preventive measures in place. In addition, big data and AI have come a long way in addressing fraud in banking and finance. This paper showed how big data and AI improve risk management, cyber-threat detection, and fraud prevention in banking and finance through data analysis and real-time pattern identification. Our work therefore emphasizes the importance of implementing strong privacy safeguards and of making AI algorithms explainable in order to mitigate ethical and cybersecurity issues. Using analytical approaches, AI can identify fraudulent transactions through comparison with previous data and with behavioral characteristics associated with fraud. This approach to fraud prevention has been effective in reducing losses while improving customers' confidence in the company. On the other hand, big data and AI carry disadvantages such as privacy, security, and ethical issues. Measures to safeguard customer information have to be employed in order to protect consumer data effectively. Furthermore, transparency and accountability of AI algorithms are crucial in order to avoid unfair decisions.
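The comparison-with-previous-behavior idea can be sketched with a simple statistical rule. The z-score test and threshold below are simplifying assumptions for illustration, not the paper's method.

```python
# Toy sketch of behavior-based fraud screening: flag a transaction
# when it deviates strongly from a customer's historical amounts.
# The z-score rule and 3-sigma threshold are illustrative assumptions.
import statistics

def is_suspicious(history, amount, threshold=3.0):
    """Flag amounts more than `threshold` std-devs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]  # past transactions
print(is_suspicious(history, 50.0))    # typical amount -> False
print(is_suspicious(history, 900.0))   # extreme outlier -> True
```

A production system would combine many such behavioral features and a learned model rather than a single univariate threshold.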
- Research Article
- 10.33146/2518-1181-2025-3(109)-5-13
- Jan 1, 2025
- Oblik i finansi
Today, the volume of data generated by insurance market participants is growing exponentially, and traditional Accounting Information Systems (AIS) cannot always provide analytical support in real time. Presenting a systematic analysis of the structure and functioning of information and analytical ecosystems of insurance business stakeholders (IBS) that integrate AIS, Big Data, and AI technologies, the article provides answers to three questions: Which key IBS generate and consume information flows within modern AIS, and how can these flows be classified by type and frequency of occurrence? How does integrating Big Data and AI technologies alter the structure, processing, and utilisation of information flows for financial accounting and managerial control of IBS? What synergistic effects does the combination of Big Data, AI, and AIS in IBS provide regarding financial data accuracy, process transparency, and the speed of managerial decision-making? The methodological basis of the study is a set of complementary methods, in particular, systematic analysis, a classification-typological approach, and structural-functional modelling. These methods allowed the identification of the main and auxiliary IBS, the classification of information flows according to their structure, frequency of receipt, and data sensitivity, and the construction of generalised schemes illustrating their interaction with AIS, Big Data, and AI. The researchers identified the main and auxiliary entities of the insurance business and classified information flows by structuring, frequency of receipt, and data sensitivity. The study results show that integrating Big Data and AI into AIS ensures accounting automation, accelerates management decision-making, and improves the accuracy of financial data and the transparency of management processes. 
The article develops models of multilevel interactions between technological components and IBS, demonstrating the synergistic effects of integrating advanced technologies. Insurance business stakeholders can use the results of this study to optimise the digital transformation of their AIS, enhance risk management efficiency, and support the development of personalised insurance products.
- Book Chapter
1
- 10.4324/9780429022241-9
- Aug 19, 2020
This chapter challenges the assumption that data privacy frameworks in general and the GDPR in particular can provide an appropriate regulatory solution for big data. It argues that in order to be able to properly reflect on regulatory approaches that wrestle with big data challenges, closer attention should be paid to these particular challenges. In this respect, this chapter makes three distinct contributions to the debate regarding regulatory approaches to big data: First, it develops a taxonomy of big data challenges that allows a comprehensive overview of the issues at stake. Second, it examines the capabilities and limitations of the GDPR to address the risks identified in the proposed taxonomy. Third, it offers some suggestions on the pathways that regulators should be considering when approaching big data and AI.
- Research Article
- 10.35930/kjpr.34.1.10
- Jun 30, 2021
- Korean Juvenile Protection Review
In this study, we sought ways to utilize big data and artificial intelligence to improve policies for the prevention of, and follow-up support for, youth cyberbullying. To this end, after briefly reviewing the concepts and principles of big data and artificial intelligence in the theoretical discussion, the development and dissemination of youth cyberbullying indicators, the development and dissemination of youth cyberbullying prevention apps, and the development and application of an AI robot that detects and bans profanity in SNS content were suggested as policy measures for prevention, while the development and distribution of an artificial-intelligence counseling chatbot for youth cyberbullying was reviewed as a policy measure for follow-up support. Next, to improve the validity of these theoretical discussions and discover new policy measures, three related experts, consisting of two researchers on youth cyberbullying and one big data and AI researcher from a national policy research institute, were invited on May 29, 2020, and an expert FGI was conducted on online-based cyberbullying prevention and follow-up support measures using big data and AI. Through this process, (1) the development and provision of information related to youth cyberbullying using big data analysis, (2) the development and dissemination of an application dedicated to preventing youth cyberbullying, (3) the development and application of AI robots detecting and acting on SNS content related to youth cyberbullying, and (4) the development and dissemination of AI counseling chatbots for youth cyberbullying were finally presented as policy tasks.
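The detect-and-act idea behind the proposed SNS moderation robot can be sketched as a simple content filter. The word list, tokenization, and actions below are invented placeholders, not the study's system.

```python
# Toy sketch (hypothetical, not the study's AI robot) of profanity
# detection and action on SNS posts: match tokens against a word
# list and decide what the moderation bot should do.
import re

BANNED = {"badword", "slur"}  # placeholder vocabulary

def moderate(post):
    """Return the action a moderation bot might take on a post."""
    tokens = set(re.findall(r"[a-z]+", post.lower()))
    return "ban" if tokens & BANNED else "allow"

print(moderate("this contains a badword here"))  # ban
print(moderate("a friendly message"))            # allow
```

An actual deployment would need learned classifiers rather than a word list, since keyword matching misses obfuscated or context-dependent abuse.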
- Research Article
10
- 10.3390/electronics12081943
- Apr 20, 2023
- Electronics
The amount of data in the maritime domain is rapidly increasing due to the growth of devices that can collect marine information, such as sensors, buoys, ships, and satellites. Maritime data is growing at an unprecedented rate, with terabytes of marine data collected every month and petabytes of data already public. Heterogeneous marine data collected through various devices can be used in fields such as environmental protection, defect prediction, transportation route optimization, and energy efficiency. However, the high heterogeneity of such marine big data makes vessel-related data difficult to manage, and owing to this heterogeneity and other challenges associated with big data, such applications are still underdeveloped and fragmented. In this paper, we propose the Vessel Data Lakehouse architecture, consisting of a Vessel Data Lake layer that can handle marine big data, a Vessel Data Warehouse layer that supports marine big data processing and AI, and a Vessel Application Services layer that supports marine application services. Our proposed Vessel Data Lakehouse can efficiently manage heterogeneous vessel-related data: various types of heterogeneous data can be integrated and managed at low cost by structuring them with an open-source big data framework, and the vessel big data stored in the Data Lakehouse can be directly utilized by various vessel analysis services. We present an actual use case of a vessel analysis service in a Vessel Data Lakehouse using AIS data from the Busan area.
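The three-layer idea can be sketched in miniature: raw heterogeneous AIS-style records (Data Lake), a structured queryable table (Data Warehouse), and a simple analysis service on top (Application Services). The field names and records below are invented for illustration; the paper's actual stack is an open-source big data framework, not SQLite.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the
# Vessel Data Lakehouse layers using toy AIS-style records.
import json
import sqlite3

# Data Lake layer: heterogeneous raw records as they arrive
raw_records = [
    '{"mmsi": 440123456, "lat": 35.10, "lon": 129.04, "speed": 12.3}',
    '{"mmsi": 440987654, "lat": 35.08, "lon": 129.07, "speed": 0.1}',
]

# Data Warehouse layer: parse and load into a structured store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ais (mmsi INTEGER, lat REAL, lon REAL, speed REAL)")
for line in raw_records:
    rec = json.loads(line)
    conn.execute("INSERT INTO ais VALUES (?, ?, ?, ?)",
                 (rec["mmsi"], rec["lat"], rec["lon"], rec["speed"]))

# Application Services layer: a toy analysis (vessels under way)
moving = conn.execute("SELECT mmsi FROM ais WHERE speed > 0.5").fetchall()
print(moving)  # [(440123456,)]
```

The point of the layering is that the same warehouse table serves many analysis services without re-ingesting the heterogeneous raw data.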
- Book Chapter
5
- 10.4018/978-1-6684-9285-7.ch012
- Nov 27, 2023
Big data and AI/ML pipeline models provide a good basis for analyzing and selecting technical architectures for big data and AI systems. The experience of many big data projects has shown that many projects use similar architectural models that differ only in the selection of different technological components within the same diagram. The big data and AI/ML pipeline framework is used to describe pipeline stages in big data and AI/ML projects, and supports the classification of benchmarks. This includes four pipeline stages: data acquisition/collection and storage; data preparation and curation; data analysis with artificial intelligence/machine learning; and action and interaction, including data visualization, user interaction, and API access. The authors have also created a toolkit that helps identify and leverage existing benchmarks according to these stages, as well as the different technical areas and data types within the framework.