ANALYSIS OF THE APPLICATION FEATURES OF PROBABILISTIC AND STATISTICAL METHODS OF DATA PROCESSING FOR FORECASTING AND MODELING CRISIS SITUATIONS
- Research Article
- 10.1088/1742-6596/1774/1/011001
- Jan 1, 2021
- Journal of Physics: Conference Series
International Conference on Data Processing Algorithms and Models
- Research Article
- 10.6084/m9.figshare.10279274
- Nov 4, 2019
This presentation reports the mid-term progress of the PhD research. Research goals: investigating the geology and submarine geomorphology of the Pacific trenches; technically improving and testing advanced algorithms of geodata analysis; applying innovative methods in cartographic data visualization and mapping; developing techniques for the automatic digitizing of cross-section profiles; sequential data processing and modelling with QGIS, Python, R, GMT, AWK, and Octave. Automation in geological data analysis aims at precision and reliability of the results, increased speed of data processing, and accuracy of data modelling, all crucial for the big data processing common in marine geological field observations. Geospatial analysis is applied to identify variations and to highlight correlations between the geomorphic shapes of the trenches (slope steepness gradient, depth ranges). Research object: deep-sea trenches of the Pacific Ocean. Research focus: submarine geomorphology of the trenches (comparative analysis of their structure); seafloor bathymetry of the trenches (modelling spatial variations of their patterns); impact factors affecting trench formation (highlighting their variability). Research techniques. Methods: data analysis, processing, visualization, statistical modelling, cartographic mapping, 3D and 2D simulation models, graphical plotting. Tools: Generic Mapping Tools (GMT); QGIS plugins; statistical libraries of the programming languages Python, R, Matlab/Octave, and AWK. The presentation includes visualized maps, preliminary statistical results, and discussion.
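Purely as an illustration of the kind of sequential profile processing described above, the following minimal Python sketch computes slope-steepness gradients and the depth range along a digitized bathymetric cross-section; the arrays and values are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical digitized cross-section profile of a trench:
# distance along the profile (km) and depth (m, negative downward).
distance_km = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
depth_m = np.array([-4200, -4800, -5900, -7800, -10400, -8200, -6100], dtype=float)

# Slope gradient in degrees between consecutive profile points.
d_depth = np.diff(depth_m)              # change in depth (m)
d_dist = np.diff(distance_km) * 1000.0  # change in distance (m)
slope_deg = np.degrees(np.arctan2(np.abs(d_depth), d_dist))

print("max slope (deg):", slope_deg.max())
print("depth range (m):", depth_m.max() - depth_m.min())
```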
- Conference Article
- 10.2991/icitmi-15.2015.110
- Jan 1, 2015
With the further development of the Internet and the rapid expansion of its use, the amount of data handled by the Internet of Things has become massive. The key problem to be considered is how to process these data efficiently, extract useful information from them, and then support intelligent decision making, in response to the new demands of massive data processing in the era of the Internet of Things. To understand the massive data processing techniques used in the Internet of Things, the paper analyzes massive, heterogeneous, multi-dimensional, and dynamic network data. It explores four aspects, namely network data acquisition, data transmission, data modelling and storage, and data processing applications, and, based on the data processing model of the Internet of Things technology system, constructs a general model for network data storage, analysis, and presentation.
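As a toy illustration of the four aspects the paper names (acquisition, transmission, data model and storage, processing applications), the following sketch chains hypothetical placeholder stages; it is not the paper's system.

```python
# A minimal staged-pipeline sketch of the four named aspects; every
# function here is a hypothetical stand-in, not the paper's design.
def acquire():                 # network data acquisition
    return [{"sensor": "s1", "value": 21.5}]

def transmit(records):         # data transmission
    return list(records)       # stand-in for sending over the network

store = []                     # data model and storage

def process(records):          # data processing application
    return sum(r["value"] for r in records) / len(records)

batch = transmit(acquire())
store.extend(batch)
print("decision input:", process(store))
```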
- Conference Article
- 10.1115/imece2021-71259
- Nov 1, 2021
This paper shows how methodical modeling of the product and process data of design methods, using modular lightweight design as an example, can be implemented to improve data traceability and continuity. To that end, the underlying methods from the areas of modular lightweight design, Model-Based Systems Engineering, and methodical process steps are analyzed and their challenges collected. A data-driven linkage of product and process models for methods is used as the basis. To establish this within the modular lightweight design method, its six main steps were analyzed with regard to possible inconsistencies and links. In this context, it is particularly important which input and output data are required or generated in each of these steps. Using this information, the link between the process and product data models can be established. With this link, the consistency and continuity of the data can be improved, as uses and changes in one step can be traced to other relevant steps. Because the data itself is now more consistent, the steps and the method itself provide better consistency and continuity. A SysML model is presented with which the product data model can be consistently linked with the process data model, based on a data tree. This integrated process and product data model is applied to aircraft cabin development to develop continuous, consistent models in a methodical manner.
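A minimal sketch of the data-driven linkage idea: each method step records the data items it consumes and produces, so a change to one item can be traced to every affected step. The step names and data items are illustrative, not the paper's SysML model.

```python
from dataclasses import dataclass, field

@dataclass
class MethodStep:
    """One step of a design method, with its required inputs and produced outputs."""
    name: str
    inputs: set[str] = field(default_factory=set)
    outputs: set[str] = field(default_factory=set)

def trace_data_item(steps: list[MethodStep], item: str) -> list[str]:
    """Return the steps that consume or produce a given data item,
    so a change to that item can be traced through the method."""
    return [s.name for s in steps if item in s.inputs or item in s.outputs]

steps = [
    MethodStep("define module structure", outputs={"module list"}),
    MethodStep("size lightweight components", inputs={"module list"},
               outputs={"component geometry"}),
    MethodStep("verify cabin integration", inputs={"component geometry"}),
]
print(trace_data_item(steps, "module list"))
# A change in 'module list' affects exactly the steps printed above.
```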
- Conference Article
- 10.1109/jcai.2009.122
- Apr 1, 2009
Business process models and data models play important roles in information system construction. They represent two different perspectives of business knowledge and are closely related. A threat to model quality is inconsistency between the business process model and the data model, which often leads to interaction errors. Although finding such inconsistency is a meaningful problem, it receives little attention in available verification methods. We concentrate on this problem and identify several consistency anomalies between process models and data models. In our paper, a verification method, PDGV, is proposed to verify the consistency between a process model and a data model. An implemented prototype shows that our scheme can detect consistency anomalies effectively.
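The PDGV method itself is not detailed in the abstract; as a hedged illustration of one classic process/data consistency anomaly, the sketch below flags activities that read a data object no earlier activity has written. All names are hypothetical.

```python
def find_read_before_write(activities):
    """Flag a simple consistency anomaly: an activity reads a data object
    that no earlier activity in the process has written.
    `activities` is an ordered list of (name, reads, writes) triples.
    (Illustrative only; the paper's PDGV method is not reproduced here.)"""
    written, anomalies = set(), []
    for name, reads, writes in activities:
        for obj in reads:
            if obj not in written:
                anomalies.append((name, obj))
        written |= set(writes)
    return anomalies

process = [
    ("receive order", [], ["order"]),
    ("check credit", ["customer record"], []),  # 'customer record' never written
    ("ship goods", ["order"], ["shipment"]),
]
print(find_read_before_write(process))  # [('check credit', 'customer record')]
```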
- Single Book
- 10.1596/1813-9450-9596
- Mar 1, 2021
While regulations on personal data diverge widely between countries, it is nonetheless possible to identify three main models based on their distinctive features: one model based on open transfers and processing of data, a second based on conditional transfers and processing, and a third based on limited transfers and processing. These three data models have become a reference for many other countries when defining their rules on the cross-border transfer and domestic processing of personal data. The study reviews their main characteristics and systematically identifies, for 116 countries worldwide, which model they adhere to for the two components of data regulation (i.e., cross-border transfers and domestic processing of data). In a second step, using gravity analysis, the study estimates whether countries sharing the same data model exhibit higher or lower digital services trade compared with countries with different regulatory data models. The results show that sharing the open data model for cross-border data transfers is positively associated with trade in digital services, and sharing the conditional model for domestic data processing is also positively correlated with trade in digital services. Country pairs sharing the limited model, instead, exhibit a double whammy: they show negative trade correlations across both components of data regulation. Robustness checks control for restrictions on digital services and the quality of digital infrastructure, as well as for the use of alternative data sources.
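As a rough sketch of the gravity analysis described, the following snippet fits an illustrative gravity specification with dummies for shared data models on synthetic country-pair data; the specification, variable names, and coefficients are assumptions for illustration, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical country pairs

# Illustrative gravity specification:
# log(trade_ij) = b0 + b1*log_gdp + b2*log_dist
#                 + b3*shared_open + b4*shared_limited + e
log_gdp = rng.normal(10, 1, n)
log_dist = rng.normal(8, 0.5, n)
shared_open = rng.integers(0, 2, n)
shared_limited = rng.integers(0, 2, n)
log_trade = (1 + 0.8 * log_gdp - 1.1 * log_dist + 0.3 * shared_open
             - 0.4 * shared_limited + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), log_gdp, log_dist, shared_open, shared_limited])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(dict(zip(["const", "log_gdp", "log_dist", "shared_open", "shared_limited"],
               beta.round(2))))
```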
- Book Chapter
- 10.1007/978-3-319-91189-2_37
- May 27, 2018
In heterogeneous data processing, varied data models often make analytic tasks too hard to achieve optimal performance, so it is necessary to unify heterogeneous data into the same data model. How to determine the proper intermediate data model and unify the heterogeneous data models involved in an analytical task is an urgent problem to be solved. In this paper, we propose a model determination method based on cost estimation. It evaluates the execution cost of query tasks on different data models, takes this cost as the criterion for measuring each data model, and chooses the data model with the least cost as the intermediate representation during data processing. Experimental results on BigBench datasets show that the proposed cost-estimation-based method can appropriately determine the data model, making heterogeneous data processing efficient.
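A minimal sketch of the selection criterion described: estimate the workload cost of each candidate data model and pick the cheapest as the intermediate representation. The cost table and model names are made up for illustration, not taken from the BigBench experiments.

```python
def choose_intermediate_model(models, queries, cost):
    """Pick the data model with the least total estimated execution cost
    over the analytical workload, mirroring the paper's selection criterion.
    `cost(model, query)` is a user-supplied estimator (hypothetical here)."""
    return min(models, key=lambda m: sum(cost(m, q) for q in queries))

# Illustrative cost table (made-up numbers).
COSTS = {
    ("relational", "join-heavy aggregation"): 5.0,
    ("relational", "key-value lookup"): 2.0,
    ("document", "join-heavy aggregation"): 9.0,
    ("document", "key-value lookup"): 1.0,
}
models = ["relational", "document"]
queries = ["join-heavy aggregation", "key-value lookup"]
best = choose_intermediate_model(models, queries, lambda m, q: COSTS[(m, q)])
print(best)  # relational
```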
- Conference Article
- 10.2991/mmme-16.2016.18
- Jan 1, 2016
Collection and acquisition of data from water meters, electric meters, heat meters, and gas meters is an effective way to improve the intelligence and automation level in the energy field. In this paper, the data collection technology scheme for public utilities is introduced. Building on construction achievements and experience in the electric field, and through the integration of the water, electricity, gas, and heat data models, the requirement for shared storage of the various meters' data is met at the database level. Given the actual situation of flexible networking in the field, the difficulty of centralized data collection and centralized communication from meters is solved by sharing the upstream channel and employing downlink channels with a variety of communication technologies. In the 'four meters' data collection and processing, a memory-based data bus, the information flow of the data processing model, and address-space mapping of the real-time database improve the system's efficiency and performance. Finally, the goal of public utilities data collection and comprehensive application is achieved. KEYWORDS: multi-application of polymorphism; shared channel; exclusive data channels; data processing model; distributed storage.
… public resources, and further enhance the service level of social public utilities. Application fields of the electricity-utilization information acquisition system are well developed. Relying on the meters, terminals, communication channels, and main-station resources of the collection systems in the provincial electric power companies of the State Grid Corporation of China (Song Lei et al., 2004; Jin Rongjiang et al., 2010), a public utilities data acquisition platform can be established to intensively collect electricity, water, gas, and heat meter data (abbreviated below as the 'four meters in one'), with unified billing and unified payment. This realizes cross-industry sharing of energy information resources, completely changes the public utilities management service mode (Xiong Hua, 2012), and improves the overall intelligent control and automatic management level in the energy field.
2 CENTRALIZED DATA COLLECTION SYSTEM ARCHITECTURE. The centralized data collection platform architecture for public utilities is shown in Figure 1. This architecture draws on the design of the electric power industry's utilization and collection system, making full use of its collection terminals and channel resources (Yang Yongbiao et al., 2012; Wang Jiye et al., 2015), and covers the various types of water, gas, and heat meters to complete 'four meters in one' centralized data collection. [Fig. 1: System architecture of data collection.] The whole architecture is divided into five parts: the terminal equipment layer, the network communication layer, the front resolution layer, the data layer, and the application layer.
In the terminal equipment layer, the water, electricity, gas, and heat meters link through communication modules to the upper smart meters or to the gathering module of the concentrator, and the collected data are uploaded to the front analysis layer of the main station (Zhan Tongping, 2014). The front analysis layer analyses the data according to the water, electricity, gas, and heat meter communication protocols and uploads them to the data layer. The data layer integrates the water (Liu Bing et al., 2007; Xu Kunyao et al., 2011), electricity, gas, and heat data models through the public utilities data platform, forming an integrated panoramic data model; through the public utilities platform, real-time 'four meters in one' collection data, static archive information, and collection information can be accessed. The application layer calls the data for display, report queries, and other business functions (Cao Zhigang, 2013); meanwhile, the marketing business application synchronizes customer files and meter numbers through an interface and completes unified accounting and payment for water, electricity, gas, and heat.
3.1 The Integrated Data Modeling Technology. Establishing an integrated panoramic public-resources data model for water, electricity, gas, and heat is the basis for accomplishing centralized data acquisition. Each industry maintains its own private data model, the data model standards differ widely among industries, and the data models cannot share or exchange information. The IEC 61970 standard (Liu Haitao et al., 2008; Wang Yiming, 2010) is widely used in the electric power field, and the development of techniques for model derivation, combination, and checking lays the foundation for coordinating models across the energy industry and between superior and subordinate levels within an industry. Since the electric power industry started earlier in the field of automated collection and control and has mature description standards for electric power equipment models and customer file models, the data models of public utilities platforms are extended from the electric power industry data model standard. Building the integrated data model mainly includes model adaptation, model splicing, model validation, model dynamic analysis, model checking, and model release, eventually forming a panoramic data model that can support centralized public utilities collection. The key nodes of the maintenance process of the integrated data model are shown in Fig. 2.
- Conference Article
- 10.1109/dessert58054.2022.10018658
- Dec 9, 2022
CubeSat development has become a fast-growing industry over the last 20 years, mainly because of the general availability of COTS hardware and software components. Using COTS hardware and software solutions brought speed and ease to the development process but also led to many unsuccessful missions. Today, with the cost and availability of COTS hardware, memory and computation resources are no longer the main bottleneck in designing a nanosatellite. This change opened a significant breakthrough in using more complex data structures and models in CubeSat software platform components. These models, drawn from the broader spectrum of applications with larger data footprints, allow CubeSat developers to benefit from extra metadata rather than a non-parametrized binary chunk of bits and bytes. This paper proposes a data model for representing and handling constants, measurements, and calculation results in CubeSat software. Along with the data model, processing methods are proposed that provide a framework for improving the quality of open-source software projects for the experimental development and further support of CubeSat student nanosatellites. The basis of the concept is an effort to improve the methods and means of automatic verification and validation of nanosatellite software, since it is the correctness of data handling that determines the mission success of the satellite. The proposed conceptual data model and data processing methods provide the basis for static and dynamic data validation.
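A hedged sketch of the idea of carrying measurements with metadata rather than as raw binary chunks, so values can be validated against declared bounds; the field names and ranges are illustrative, not the paper's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measurement:
    """A telemetry value carried with metadata instead of a raw binary chunk.
    Field names and ranges are illustrative, not the paper's actual schema."""
    name: str
    value: float
    unit: str
    valid_min: float
    valid_max: float

    def validate(self) -> bool:
        """Range check: is the value inside its declared bounds?"""
        return self.valid_min <= self.value <= self.valid_max

bus_voltage = Measurement("bus_voltage", 8.1, "V", valid_min=6.0, valid_max=8.4)
assert bus_voltage.validate()
bad = Measurement("bus_voltage", 12.7, "V", valid_min=6.0, valid_max=8.4)
print(bad.validate())  # False -> caught before it corrupts downstream processing
```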
- Research Article
- 10.1111/1752-1688.13033
- Jun 7, 2022
- JAWRA Journal of the American Water Resources Association
The current traditional forecasting-level evaluation method uses only the forecast error series for analysis, for example dividing the number of qualified forecasts by the total number of forecasts to indicate the corresponding forecasting accuracy or level. This does not consider the different external environments during the forecast, so when assessing the forecasting level of different forecasters, fairness cannot be fully reflected. This paper therefore puts forward the concept of forecasting difficulty and analyses its physical significance. Based on the traditional concepts of forecasting accuracy and error, two methods for calculating forecasting difficulty are established to account for different forecasting situations, such as rainy and rainless conditions, different foresight periods, and different inflow levels. Further, a new comprehensive forecasting-level evaluation method for forecasters is proposed. Taking the Guandi reservoir as an example, the case study results show that, compared with the traditional method, the proposed evaluation method can effectively account for the difficulty factors of different forecasting situations. In addition, the methods better reflect the contribution of difficult situations to the comprehensive forecasting level when considering improvements in forecasting accuracy, which makes the obtained results more scientific and reasonable.
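A toy contrast between the traditional level (qualified forecasts divided by total forecasts) and a difficulty-weighted variant; the weighting below is an assumption for illustration, not the paper's actual formulas.

```python
def traditional_level(qualified, total):
    """Traditional forecasting level: share of qualified forecasts."""
    return qualified / total

def difficulty_weighted_level(results):
    """Difficulty-weighted level (illustrative, not the paper's exact formula):
    each forecast is (qualified: bool, difficulty: float), and harder
    situations contribute more to the score."""
    total_weight = sum(d for _, d in results)
    return sum(d for ok, d in results if ok) / total_weight

# Two forecasters with the same hit rate but different situation difficulty.
a = [(True, 0.2), (True, 0.3), (False, 0.9)]  # missed only the hard case
b = [(True, 0.9), (True, 0.8), (False, 0.2)]  # hit the hard cases
print(traditional_level(2, 3), traditional_level(2, 3))  # identical: 2/3 each
print(round(difficulty_weighted_level(a), 2),
      round(difficulty_weighted_level(b), 2))            # b scores higher
```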
- Research Article
- 10.3390/math11132883
- Jun 27, 2023
- Mathematics
The whole world has entered the VUCA era, and some traditional methods of problem analysis are beginning to fail; complexity science is needed to study and solve problems from the perspective of complex systems. As a complex system full of volatility and uncertainty, price fluctuations have attracted wide attention from researchers. Through a literature review, this paper therefore analyzes the research applying complexity theories to price prediction. The following conclusions are drawn: (1) Price forecasting receives more attention year by year, and the number of published articles shows a rapidly rising trend. (2) Hybrid models can achieve higher prediction accuracy than single models. (3) The complexity of models is increasing; in the future, more complex methods will be applied to price forecasting, including AI technologies such as LLMs. (4) Crude-oil prices and stock prices will continue to be the focus of research, with carbon prices, gold prices, Bitcoin, and others becoming new research hotspots. The innovations of this research include three aspects: (1) an analysis of all articles on price prediction using mathematical models over the past 10 years, rather than of a single field such as oil prices or stock prices; (2) a classification of the research methods of price forecasting in different fields and identification of the common problems of price forecasting across fields (including data processing methods and model selection), providing a reference for researchers selecting price forecasting models; and (3) the use of VOSviewer to analyze the hot words of recent years along a timeline, identify research trends, and provide a reference for researchers choosing future research directions.
- Research Article
- 10.1016/j.cageo.2005.12.010
- Feb 9, 2006
- Computers and Geosciences
A generalized web service model for geophysical data processing and modeling
- Research Article
- 10.4018/ijeis.2016040101
- Apr 1, 2016
- International Journal of Enterprise Information Systems
Data modelling is a complex process that depends on the knowledge and experience of the designers who carry it out. The quality of created models has a significant impact on the quality of successive phases of information systems development. This paper, in short, reviews the data modelling process, the entity-relationship method (ERM) and actors in the data modelling process. Further, in more detail it presents systems, methods, and tools for the data modelling process and identifies problems that occur during the development phase of an information system. These problems also represent the authors' motivation for conducting research that aims to develop a knowledge-based system (KBS) in order to support the data modelling process by applying formal language theory (particularly translation) during the process of conceptual modelling. The paper describes the main identified characteristics of the authors' new KB system that are derived from the analysis of existing systems, methods, and tools for the data modelling process. This represents the focus of the research.
- Conference Article
- 10.22323/1.282.0852
- Feb 6, 2017
Choices in persistent data models and data organization have significant performance ramifications for data-intensive scientific computing. In experimental high energy physics, organizing file-based event data for efficient per-attribute retrieval may improve the I/O performance of some physics analyses but hamper the performance of processing that requires full-event access. In-file data organization tuned for serial access by a single process may be less suitable for opportunistic sub-file-based processing on distributed computing resources. Unique I/O characteristics of high-performance computing platforms pose additional challenges. The ATLAS experiment at the Large Hadron Collider employs a flexible I/O framework and a suite of tools and techniques for persistent data organization to support an increasingly heterogeneous array of data access and processing models.
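A schematic contrast between full-event (row-wise) and per-attribute (columnar) organization, in plain Python; the structures and attribute names are hypothetical, not the ATLAS I/O framework's.

```python
# Row-wise (full-event) layout: each event carries all its attributes.
events_rowwise = [
    {"pt": 41.2, "eta": 0.3, "phi": 1.1},
    {"pt": 17.8, "eta": -1.2, "phi": 2.9},
    {"pt": 63.5, "eta": 0.7, "phi": -0.4},
]

# Columnar (per-attribute) layout: one contiguous sequence per attribute.
events_columnar = {key: [ev[key] for ev in events_rowwise]
                   for key in events_rowwise[0]}

# A per-attribute analysis touches only the column it needs...
mean_pt = sum(events_columnar["pt"]) / len(events_columnar["pt"])
print("mean pt:", round(mean_pt, 2))

# ...while full-event processing must read every attribute of each event.
for ev in events_rowwise:
    full_event = (ev["pt"], ev["eta"], ev["phi"])
```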
- Conference Article
- 10.1145/1500774.1500848
- Jan 1, 1982
The Data Model Processor (DMP) is an interactive tool for defining and evaluating data models. It is based on Positional Set Notation, a formalism for uniform representation of data modeling objects. The DMP allows the user to enter a set-theoretic description of a data model's structures and a definition of the model's primitive operations based on positional set operations. Based on the data model definition, the DMP then emulates a database management system (DBMS) implementing that data model. It allows the user to play various roles associated with a DBMS, such as database definer and end user. This paper gives an overview of the DMP and discusses its foundations, namely Positional Set Notation and a Positional Set Processor. It traces an example showing how the DMP has been used to model the relational data model. (Hierarchical and network models have also been implemented on the DMP.) Future applications of the DMP are considered.
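In the spirit of defining a data model's primitive operations set-theoretically, the sketch below models relations as sets of positional tuples with selection and projection as set operations; Positional Set Notation itself is not reproduced here.

```python
# A relation as a set of positional tuples; the model's primitive
# operations are defined as set operations. (Illustrative only.)
employee = {("alice", "dev"), ("bob", "qa"), ("carol", "dev")}

def select(relation, position, value):
    """Restriction: tuples whose component at `position` equals `value`."""
    return {t for t in relation if t[position] == value}

def project(relation, positions):
    """Projection onto the given positions."""
    return {tuple(t[p] for p in positions) for t in relation}

devs = select(employee, 1, "dev")
names = project(devs, (0,))
print(sorted(names))  # [('alice',), ('carol',)]
```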
- Research Article
- 10.32447/20784643.31.2025.07
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.16
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.02
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.03
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.22
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.19
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Journal Issue
- 10.32447/20784643.31.2025.00
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.04
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.13
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety
- Research Article
- 10.32447/20784643.31.2025.01
- Jan 1, 2025
- Bulletin of Lviv State University of Life Safety