Daily Water Volume Prediction Algorithm of Urban Smart Water Based on Big Data
- Research Article
1867
- 10.1007/s10708-013-9516-8
- Nov 29, 2013
- GeoJournal
‘Smart cities’ is a term that has gained traction in academia, business and government to describe cities that, on the one hand, are increasingly composed of and monitored by pervasive and ubiquitous computing and, on the other, whose economy and governance are being driven by innovation, creativity and entrepreneurship, enacted by smart people. This paper focuses on the former and, drawing on a number of examples, details how cities are being instrumented with digital devices and infrastructure that produce ‘big data’. Such data, smart city advocates argue, enable real-time analysis of city life and new modes of urban governance, and provide the raw material for envisioning and enacting more efficient, sustainable, competitive, productive, open and transparent cities. The final section of the paper provides a critical reflection on the implications of big data and smart urbanism, examining five emerging concerns: the politics of big urban data; technocratic governance and city development; corporatisation of city governance and technological lock-ins; buggy, brittle and hackable cities; and the panoptic city.
- Research Article
19
- 10.1007/s40273-015-0378-4
- Jan 25, 2016
- PharmacoEconomics
Big Data and Its Role in Health Economics and Outcomes Research: A Collection of Perspectives on Data Sources, Measurement, and Analysis.
- Conference Article
1
- 10.1109/hicss.2015.183
- Jan 1, 2015
HICSS-48 marks the beginning of a new mini-track on topics at the intersection of the Internet of Things and Big Data Analytics. The mini-track addresses issues organizations face as they seek to make use of data collected from mobile tracking devices such as RFID and other tracking and sensor technologies. Big data analytics is an increasingly important activity that is driven by the pervasive diffusion and adoption of RFID, mobile devices, social media tools, and the Internet of Things (IoT). The IoT allows for the connection and interaction of smart devices as they move and exist within today’s value chain. This allows for unprecedented process visibility that creates tremendous opportunities for operational and strategic benefits. However, the effective management of this visibility for improved decision making requires the combination and analysis of data from item-level identification using RFID, sensors, social media feeds, and cell phone GPS signals; in short, big data analytics. While the IoT and big data analytics have tremendous potential for transforming various industries, many scholars and practitioners are struggling to capture the business value from combining the IoT and big data analytics. In addition, little research has been conducted to assess the potential of the IoT using big data analytics. In this mini-track, we hope to develop a stream of research where researchers will share new and interesting theoretical and methodological perspectives on this topic. We believe the papers represented in this inaugural mini-track are a good kickoff to what we hope will be more exciting and enlightening each year. We open the mini-track with a paper entitled “Research Directions on the Adoption, Usage and Impact of the Internet of Things through the Use of Big Data Analytics” where Fred Riggins and Samuel Fosso Wamba bring into focus several of the important research questions this mini-track will address. The paper begins by defining current perspectives on the IoT and highlights current research in this area. It then proposes a framework for analyzing the adoption, usage and impact of the IoT enabled through big data analytics. The framework is applied to several research questions that need to be examined if researchers are to understand the non-technical issues related to the emergence of the IoT. Specifically, research questions are posed at four levels of analysis: the individual, organizational, industry, and societal levels. The second paper by Robert Minch is entitled “Location Privacy in the Era of the Internet of Things and Big Data Analytics.” As the IoT emerges there is concern that loss of privacy may occur that could impact individuals’ incentives to belong to online networks, interact using online social media, and engage in activities associated with being digital citizens. These privacy issues involve sensing activities, identification and authentication of identities, storage of personal information, processing of this information, incentives to share information, and the range of activities available to use this information. These six phases of information flow all take place within three different contexts: technical, social, and legal contexts. This paper examines these issues across these six phases of information flow and identifies example privacy measures that are being used, and can be used, for each phase. A literature review of existing research on the technical, social, and legal measures is provided. 
The third paper, “Dynamic Price Prediction for Amazon Spot Instances” by Vivek Kumar Singh and Kaushik Dutta, illustrates the importance of being able to dynamically and efficiently price services in contexts such as the IoT. In the case examined in this paper, cloud vendors, such as Amazon Web Services, provide “spot instances” of cloud-based resources that are dynamically priced through an auction mechanism. This paper develops a novel algorithm for spot price prediction that achieves a Mean Absolute Percent Error (MAPE) of 9.4% for short-term forecasting (one day ahead) and less than 20% MAPE for long-term forecasting (five days ahead). Such novel pricing algorithms will find a place within the context of the IoT, as spot services will need to be negotiated, priced, and provided with a short lead time. (2015 48th Hawaii International Conference on System Sciences)
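The abstract above quotes Mean Absolute Percent Error (MAPE) as the accuracy measure for the spot-price predictor. As a minimal sketch of how that metric is computed (the price figures below are invented for illustration, and the prediction algorithm itself is not reproduced):

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percent Error, in percent.
    Assumes no actual value is zero (spot prices are strictly positive)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

# Hypothetical one-day-ahead spot prices (USD per hour), for illustration only.
actual_prices = [0.052, 0.049, 0.055, 0.061, 0.058]
predicted_prices = [0.050, 0.051, 0.053, 0.057, 0.060]
print(f"MAPE: {mape(actual_prices, predicted_prices):.1f}%")
```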
- Conference Article
- 10.15405/epsbs.2021.04.02.93
- Apr 30, 2021
The daily growing volume of electronic data poses challenges to traditional ways of organizing data storage, processing, and analysis. The feasibility of further research in this area is confirmed by the high demand for data storage and analytical data processing services. The article is devoted to the study of big data security issues. Although the underlying technological capabilities existed earlier, the term 'Big data' has only been in active use in the last decade. The term, which originates from the exact sciences and programming, has caused and continues to cause much controversy over what exactly is meant by Big data. In the course of the work, the specifics of Big data were described, the main characteristics of Big data were highlighted, and modern approaches to big data protection were analyzed. The role of Big data in the development of the digital economy and the need to address issues of their legal regulation are outlined. The following problems of Big data protection were formulated: limits on the speed of data access, and the organization of access via network protocols over general-purpose networks. A review of known developers of protection methods in these areas is given, together with recommendations for their application. Problems of the legal regulation of Big data are revealed. The complexity of the problem, the need for multi-level protection, and the value of applying modern methodological, theoretical and software developments from the first steps of working with Big data are shown.
- Research Article
198
- 10.1016/j.cities.2020.102992
- Nov 20, 2020
- Cities
The analysis of big data is deemed to define a new era in urban research, planning and policy. Real-time data mining and pattern detection in high-frequency data can now be carried out at a large scale. Novel analytical practices promise smoother decision-making as part of a more evidence-based and smarter urbanism, while critical voices highlight the dangers and pitfalls of instrumental, data-driven city making to urban governance. Less attention has been devoted to identifying the practical conditions under which big data can realistically contribute to addressing urban policy problems. In this paper, we discuss the value and limitations of big data for long-term urban policy and planning. We first develop a theoretical perspective on urban analytics as a practice that is part of a new smart urbanism. We identify the particular tension of opposed temporalities of high-frequency data and the long durée of structural challenges facing cities. Drawing on empirical studies using big urban data, we highlight epistemological and practical challenges that arise from the analysis of high-frequency data for strategic purposes and formulate propositions on the ways in which urban analytics can inform long-term urban policy.
- Research Article
373
- 10.1088/1748-9326/ab1b7d
- Jul 1, 2019
- Environmental Research Letters
Big Data and machine learning (ML) technologies have the potential to impact many facets of environment and water management (EWM). Big Data are information assets characterized by high volume, velocity, variety, and veracity. Fast advances in high-resolution remote sensing techniques, smart information and communication technologies, and social media have contributed to the proliferation of Big Data in many EWM fields, such as weather forecasting, disaster management, smart water and energy management systems, and remote sensing. Big Data brings about new opportunities for data-driven discovery in EWM, but it also requires new forms of information processing, storage, retrieval, as well as analytics. ML, a subdomain of artificial intelligence (AI), refers broadly to computer algorithms that can automatically learn from data. ML may help unlock the power of Big Data if properly integrated with data analytics. Recent breakthroughs in AI and computing infrastructure have led to the fast development of powerful deep learning (DL) algorithms that can extract hierarchical features from data, with better predictive performance and less human intervention. Collectively, Big Data and ML techniques have shown great potential for data-driven decision making, scientific discovery, and process optimization. These technological advances may greatly benefit EWM, especially because (1) many EWM applications (e.g. early flood warning) require the capability to extract useful information from a large amount of data in an autonomous manner and in real time, (2) EWM research has become highly multidisciplinary, and handling the ever-increasing data volume and types with the traditional workflow is simply not an option, and, last but not least, (3) the current theoretical knowledge about many EWM processes is still incomplete, but it may now be complemented through data-driven discovery. A large number of applications of Big Data and ML have already appeared in the EWM literature in recent years. The purposes of this survey are to (1) examine the potential and benefits of data-driven research in EWM, (2) give a synopsis of key concepts and approaches in Big Data and ML, (3) provide a systematic review of current applications, and finally (4) discuss major issues and challenges, and recommend future research directions. EWM includes a broad range of research topics. Instead of attempting to survey each individual area, this review focuses on areas of nexus in EWM, with an emphasis on elucidating the potential benefits of increased data availability and predictive analytics to improving EWM research.
- Conference Article
2
- 10.1109/cbd.2017.30
- Aug 1, 2017
With the rapid development of the Internet, the big data era has arrived, and the Internet, the Internet of Things, and vehicular networks are increasingly blending together. The problem of how to process big data needs to be studied and solved. In the field of traffic, forecasting traffic flow precisely and in real time is the prerequisite and basis for processing big data effectively. However, traditional traffic flow prediction algorithms cannot be applied to big traffic data. In order to forecast big traffic flow, this paper implements an SKmeans- and SGD-based online RBFNN prediction algorithm on the Storm platform. To achieve effective prediction, a parallelization of the algorithm is designed and implemented, combining vertical and horizontal parallelization. Experiments show that the algorithm is feasible and its accuracy can be guaranteed.
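As a rough sketch of the core idea named in the abstract, an RBF network whose output-layer weights are updated online by stochastic gradient descent, with centres that would come from a k-means step, under the assumption of a single-machine toy setting; the Storm-based parallelisation and the paper's exact SKmeans variant are not reproduced here:

```python
import numpy as np

class OnlineRBFN:
    """Illustrative radial-basis-function network whose output-layer weights
    are updated by stochastic gradient descent, one sample at a time."""

    def __init__(self, centers, width, lr=0.01):
        self.centers = np.asarray(centers, dtype=float)  # RBF centres (e.g. from a k-means step)
        self.width = width                               # shared Gaussian width
        self.lr = lr                                     # SGD learning rate
        self.weights = np.zeros(len(self.centers))       # output-layer weights

    def _phi(self, x):
        # Gaussian activations of all hidden units for one input vector.
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        return float(self._phi(np.asarray(x, dtype=float)) @ self.weights)

    def partial_fit(self, x, y):
        # One SGD step on the squared prediction error.
        phi = self._phi(np.asarray(x, dtype=float))
        error = float(phi @ self.weights) - y
        self.weights -= self.lr * error * phi

# Toy usage: predict the next traffic-flow value from the two previous ones.
rng = np.random.default_rng(0)
centers = rng.uniform(0, 100, size=(10, 2))  # stand-in for k-means centres
model = OnlineRBFN(centers, width=30.0, lr=0.005)
flow = 50 + 20 * np.sin(np.arange(200) / 10.0)
for t in range(2, 200):
    model.partial_fit(flow[t - 2:t], flow[t])
print(round(model.predict(flow[-2:]), 2))
```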
- Research Article
4
- 10.1186/s13638-020-01851-w
- Nov 2, 2020
- EURASIP Journal on Wireless Communications and Networking
In order to evaluate the airport's comprehensive service capabilities, this paper considers the impact of air quality and noise on the airport environment under the big data of air traffic activities. In this study, the concept of environmental traffic capacity and big data are applied to the air traffic field. Recently, airport air and noise pollution has been widely investigated and has become one of the major concerns of potentially exposed people. This study explores the use of governmental ambient air quality and noise standards to evaluate airport operation capacities in the era of big data. The first step is to analyze the typical airport operation scenario as the evaluation scenario. The second step is to use the air and noise emission assessment model to calculate the airport's maximum air pollutant concentration and noise level. The final step is to establish a complete airport environment traffic capacity (AETC) evaluation process. As a case study, the capacity evaluation of Nanjing Lukou International Airport (NKG) is performed using the above steps. In this case, significant associations between the pollutant concentrations/noise level and the air traffic volume were observed. The AETC of NKG was calculated successfully with the established evaluation process. The results show that the NKG maximum hourly air traffic volume is 120, the daily air traffic volume is 770, and the annual air traffic volume is 365,805, meeting the China Ambient Air Quality and Noise Standards. Although several air pollutants were investigated in this research, only NOx was found to approach the Chinese governmental standards in this case. Thus, the airport NOx concentration was selected as the AETC limiting factor.
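The three-step process described above ultimately amounts to finding the largest traffic volume whose modelled pollutant concentration and noise level still satisfy the governmental standards. Below is a minimal sketch of that final capping step, assuming a deliberately simplified linear "contribution per movement" model and hypothetical limit values; the paper's actual air-emission and noise assessment models are not reproduced:

```python
def max_hourly_volume(limits, per_movement, background, max_search=500):
    """Return the largest hourly aircraft-movement count whose modelled
    pollutant concentrations and noise stay within the given limits.

    The linear per-movement contribution is a placeholder (real noise does
    not add linearly in dB, and dispersion models are far more involved).
    """
    best = 0
    for volume in range(1, max_search + 1):
        within_limits = all(
            background[s] + per_movement[s] * volume <= limits[s]
            for s in limits
        )
        if within_limits:
            best = volume
        else:
            break
    return best

# Hypothetical figures for illustration only (not taken from the paper).
limits = {"NOx_ug_m3": 200.0, "noise_dB": 70.0}
background = {"NOx_ug_m3": 40.0, "noise_dB": 45.0}
per_movement = {"NOx_ug_m3": 1.3, "noise_dB": 0.2}
print(max_hourly_volume(limits, per_movement, background))
```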
- Research Article
14
- 10.1016/j.adro.2020.03.005
- Apr 6, 2020
- Advances in Radiation Oncology
Efforts to Reduce the Impact of Coronavirus Disease 2019 Outbreak on Radiation Oncology in Taiwan
- Research Article
51
- 10.1109/access.2019.2936941
- Jan 1, 2019
- IEEE Access
Smart urban transportation management can be considered as a multifaceted big data challenge. It strongly relies on the information collected in multiple, widespread, and heterogeneous data sources as well as on the ability to extract actionable insights from them. Besides data, full-stack (from platform to services and applications) Information and Communications Technology (ICT) solutions need to be specifically adopted to address smart city challenges. Smart urban transportation management is one of the key use cases addressed in the context of the EUBra-BIGSEA (Europe-Brazil Collaboration of Big Data Scientific Research through Cloud-Centric Applications) project. This paper specifically focuses on the City Administration Dashboard, a public transport analytics application that has been developed on top of the EUBra-BIGSEA platform and used by the Municipality stakeholders of Curitiba, Brazil, to tackle urban traffic data analysis and planning challenges. The solution proposed in this paper joins together a scalable big and fast data analytics platform, a flexible and dynamic cloud infrastructure, data quality and entity matching algorithms, as well as security and privacy techniques. By exploiting an interoperable programming framework based on a Python Application Programming Interface (API), it allows easy, rapid and transparent development of smart city applications.
- Conference Article
14
- 10.1145/3410566.3410598
- Aug 12, 2020
Big data are everywhere nowadays. Many businesses rely on big data for their success because big data are very useful and are often described as the new oil. For instance, big data are very important in predicting trends about what will happen in the future. Many researchers have generated or gathered data to further enhance their research and to apply them to numerous real-life applications. Examples of big data include healthcare patient data. To improve the detection of illnesses and diseases, researchers have gathered healthcare patient data, examined the diagnoses on healthcare patient data (e.g., cells, blood count, antibody count), and compared them with previous data to determine if a specific illness or disease exists. Having an automatic predictive method for healthcare and disease analytics would be desirable. In this paper, we focus on healthcare mining, which aims to computationally discover knowledge from healthcare data. In particular, we present a data science framework with two predictive analytic algorithms for accurate prediction of the trends of cancer cases. The algorithms predict cancerous cells based on the information of the cell data from some data samples. Evaluation results on several real-life datasets related to breast cancer demonstrate the effectiveness of our data science framework and predictive algorithms in healthcare data analytics.
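As an illustrative stand-in for the kind of predictive step described above (classifying cell-measurement records as cancerous or not), the sketch below trains an off-the-shelf classifier on scikit-learn's bundled Wisconsin breast-cancer dataset; the paper's own two algorithms and framework are not reproduced:

```python
# Minimal sketch: fit a classifier on cell-measurement features and report
# held-out accuracy. Dataset and model choice are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```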
- Research Article
74
- 10.5204/mcj.620
- Mar 2, 2013
- M/C Journal
The objective of the paper is to reflect on the affordances of different techniques for making Twitter collections and to suggest the use of a random sampling technique, made possible by Twitter’s Streaming API (Application Programming Interface), for baselining, scoping, and contextualising practices and issues. It discusses this technique by analysing a one per cent sample of all tweets posted during a 24-hour period and introducing a number of analytical directions considered useful for qualifying some of the core elements of the platform, in particular hashtags. To situate the proposal, the report first discusses how platforms propose particular affordances but leave considerable margins for the emergence of a wide variety of practices. This argument is then related to the question of how medium and sampling technique are intrinsically connected. Background Social media platforms present numerous challenges to empirical research, making it different from researching cases in offline environments, but also different from studying the “open” Web. Because of the limited access possibilities and the sheer size of platforms like Facebook or Twitter, the question of delimitation, i.e. the selection of subsets to analyse, is particularly relevant. Whilst sampling techniques have been thoroughly discussed in the context of social science research, sampling procedures in the context of social media analysis are far from being fully understood. Even for Twitter, a platform having received considerable attention from empirical researchers due to its relative openness to data collection, methodology is largely emergent. In particular the question of how smaller collections relate to the entirety of activities of the platform is quite unclear. Recent work comparing case based studies to gain a broader picture and the development of graph theoretical methods for sampling are certainly steps in the right direction, but it seems that truly large-scale Twitter studies are limited to computer science departments, where epistemic orientation can differ considerably from work done in the humanities and social sciences.
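The core methodological move above is drawing a uniform random subset from a stream of platform activity. Twitter's Streaming API performed its roughly one per cent sampling server-side; as a loosely related illustration of the same general idea, the sketch below uses classic reservoir sampling (Algorithm R) to take a fixed-size uniform sample from a stream of unknown length. The tweet IDs are synthetic and this is not the platform's mechanism:

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Uniform random sample of k items from a stream of unknown length
    (classic Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)           # each item kept with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Toy stream of synthetic tweet IDs.
print(reservoir_sample((f"tweet_{i}" for i in range(100_000)), k=5))
```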
- Research Article
1
- 10.1002/for.3034
- Sep 29, 2023
- Journal of Forecasting
Historical tourism volume, search engine data, and weather calendar data have a close causal relationship with daily tourism volume. However, when used in the prediction of daily tourism volume, the feature variables of the huge and complex search engine data are not strongly independent. These repetitive and highly correlated data must be analyzed and selected; otherwise, they will increase the training burden of the neural network and reduce prediction performance. This study proposes a daily tourism volume prediction model, maximum correlation minimum redundancy feature selection and long short‐term memory, on the basis of feature selection and deep learning. Firstly, the multivariate high‐dimensional features, including search engine data and weather factors, are selected to identify the key influencing factors. Secondly, the deep neural network is used to make a multistep forward rolling prediction of daily tourism volume. Results show that keywords for famous scenic spots, weather, historical tourism volume, and tourism strategies in the search engine data significantly improve the prediction accuracy of daily tourism volume. The proposed maximum correlation minimum redundancy feature selection and long short‐term memory model performs better than other models, such as autoregressive integrated moving average, multiple regression, support vector machine, and long short‐term memory.
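A minimal sketch of the greedy max-relevance/min-redundancy (mRMR) scoring idea described above, using mutual-information estimates from scikit-learn; the LSTM forecasting stage is omitted, and the feature matrix and target below are synthetic stand-ins for the search-engine and weather features:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X, y, k):
    """Greedily pick k feature indices by relevance (MI with the target)
    minus average redundancy (MI with already-selected features)."""
    relevance = mutual_info_regression(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best_score, best_j = -np.inf, None
        for j in remaining:
            redundancy = 0.0
            if selected:
                redundancy = np.mean(
                    [mutual_info_regression(X[:, [s]], X[:, j], random_state=0)[0]
                     for s in selected]
                )
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy data: 50 days, 8 candidate features; only features 0 and 3 drive y.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=50)
print(mrmr_select(X, y, k=3))
```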
- Research Article
3
- 10.2196/19055
- Apr 8, 2021
- JMIR Medical Informatics
Background: Big data technology provides unlimited potential for efficient storage, processing, querying, and analysis of medical data. Technologies such as deep learning and machine learning simulate human thinking, assist physicians in diagnosis and treatment, provide personalized health care services, and promote the use of intelligent processes in health care applications. Objective: The aim of this paper was to analyze health care data and develop an intelligent application to predict the number of hospital outpatient visits for mass health impact and analyze the characteristics of health care big data. Designing a corresponding data feature learning model will help patients receive more effective treatment and will enable rational use of medical resources. Methods: A cascaded depth model was successfully implemented by constructing a cascaded depth learning framework and by studying and analyzing the specific feature transformation, feature selection, and classifier algorithm used in the framework. To develop a medical data feature learning model based on probabilistic and deep learning mining, we mined information from medical big data and developed an intelligent application that studies the differences in medical data for disease risk assessment and enables feature learning of the related multimodal data. Thus, we propose a cascaded data feature learning model. Results: The depth model created in this paper is more suitable for forecasting daily outpatient volumes than weekly or monthly volumes. We believe that there are two reasons for this: on the one hand, the training data set in the daily outpatient volume forecast model is larger, so the training parameters of the model more closely fit the actual data relationship. On the other hand, the weekly and monthly outpatient volume is the cumulative daily outpatient volume; therefore, errors caused by the prediction will gradually accumulate, and the greater the interval, the lower the prediction accuracy. Conclusions: Several data feature learning models are proposed to extract the relationships between outpatient volume data and obtain the precise predictive value of the outpatient volume, which is very helpful for the rational allocation of medical resources and the promotion of intelligent medical treatment.
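The error-accumulation argument in the Results section can be illustrated numerically: if daily forecasts carry a small systematic error, summing them into weekly or monthly totals lets that error pile up. A toy sketch with synthetic outpatient counts (not the paper's data or model):

```python
import numpy as np

# Synthetic daily outpatient counts and forecasts with a slight positive bias.
rng = np.random.default_rng(7)
days = 28
actual_daily = rng.poisson(lam=500, size=days).astype(float)
predicted_daily = actual_daily + rng.normal(loc=5.0, scale=20.0, size=days)

# The cumulative error of the aggregated forecast grows with the interval.
for horizon, label in [(1, "daily"), (7, "weekly"), (28, "monthly")]:
    err = abs(actual_daily[:horizon].sum() - predicted_daily[:horizon].sum())
    print(f"{label:>7}: absolute error of cumulative volume = {err:.1f}")
```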
- Conference Article
14
- 10.1109/cyberc.2019.00081
- Oct 1, 2019
The objective of this research is to predict the daily bus passenger flow volume on a given bus line and to compare the prediction performance when using whole-week data against using weekday-only data. Based on real data collected from the bus IC card payment devices in Jiaozuo City, we first obtained time series plots of the daily passenger volume and then proposed ARIMA models to do the prediction. The results show that including weekend data is necessary to improve the prediction performance.
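A minimal sketch of the kind of ARIMA fit described above, assuming a synthetic daily boarding-count series in place of the Jiaozuo IC-card data; the order (p, d, q) is arbitrary here, whereas in practice it would be chosen from ACF/PACF plots or information criteria:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily passenger volume with a weekly cycle (stand-in data).
rng = np.random.default_rng(3)
dates = pd.date_range("2019-03-01", periods=90, freq="D")
weekly_cycle = 300 * np.sin(2 * np.pi * dates.dayofweek.to_numpy() / 7)
volume = pd.Series(2000 + weekly_cycle + rng.normal(scale=80, size=90), index=dates)

# Fit an ARIMA model and produce a one-week-ahead daily forecast.
model = ARIMA(volume, order=(2, 1, 1)).fit()
print(model.forecast(steps=7).round())
```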
- Research Article
- 10.13190/j.jbupt.2020-188
- Aug 28, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-242
- Aug 28, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-271
- Aug 28, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
1
- 10.13190/j.jbupt.2020-137
- Aug 28, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-270
- Aug 28, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-239
- Aug 28, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-237
- Aug 28, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-229
- Jun 23, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-151
- Jun 23, 2021
- Journal of Beijing University of Posts and Telecommunications
- Research Article
- 10.13190/j.jbupt.2020-190
- Jun 23, 2021
- Journal of Beijing University of Posts and Telecommunications