Secure Distributed Data Aggregation
- Research Article
3
- 10.1109/jlt.2022.3232840
- Apr 15, 2023
- Journal of Lightwave Technology
With the explosion of geo-distributed data, the valuable insights hidden in it are waiting to be extracted, which calls for effective geo-distributed data analysis methods. The traditional approach to geo-distributed data analytics is to gather all the required data into a single edge datacenter (edge DC) through one transmission and aggregation step (centralized data aggregation). However, as the volume of data grows exponentially, the centralized data aggregation scheme becomes inefficient or infeasible due to the limitations of computing and network resources. In this paper, we propose a geo-distributed data aggregation scheme for edge compute first networking (CFN) that jointly considers computation and communication resources. The proposed scheme optimizes two objectives: the first is to minimize the job completion time (JCT) by selecting cluster centers, dividing clusters, and provisioning lightpaths; the second is to reduce bandwidth consumption by reallocating routing and frequency slots subject to the JCT. To achieve these objectives, we first formulate the optimization problem of multi-stage geo-distributed data aggregation as a linear programming (LP) model. To tackle the computational complexity of the LP model, a multi-stage geo-distributed data aggregation algorithm that jointly considers computation and communication resources (MGDD-CC) is proposed. Simulation results show that the proposed scheme reduces JCT, alleviates competition for bandwidth resources, and is better suited to scenarios with stronger data aggregation effects and larger quantities of geo-distributed data.
- Research Article
7
- 10.25212/lfu.qzj.2.2.22
- Apr 15, 2017
- Qalaai Zanist Scientific Journal
In Wireless Sensor Networks (WSNs), the deployed sensor nodes can sense the same measures from the monitored area and forward these redundant measures to the sink node. Although redundant measures provide better accuracy, they consume a lot of energy during communication and processing at the node and the sink, and thus decrease the lifetime of the WSN. Therefore, eliminating redundant measures and reducing the communication cost are essential considerations when designing WSNs. In this article, a Distributed Data Aggregation (DiDA) protocol for prolonging the lifetime of WSNs is suggested. DiDA is an energy-efficient approach for a clustered network. DiDA works in cycles, and in each cycle it aggregates and reduces data dimensionality using an Adaptive Piecewise Constant Approximation (APCA) method. DiDA was successfully evaluated using the OMNeT++ network simulator on sensed data from a real sensor network. The percentage of data sent to the Cluster Head (CH), data accuracy, and energy consumption are the performance metrics applied to assess the effectiveness of the DiDA protocol. The simulation results show that the proposed DiDA protocol decreases the consumed energy and extends the network lifetime in comparison with a method that does not use data aggregation, whilst preserving the sensed data quality at the sink node.
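The APCA step at the heart of DiDA can be sketched as a greedy piecewise-constant compressor: each segment is grown while every sample stays within an error bound of the segment mean, and only (length, mean) pairs are forwarded to the CH. This is a minimal illustrative sketch, not the paper's implementation; the function names and the error bound `eps` are assumptions.

```python
def apca_compress(samples, eps):
    """Greedy piecewise-constant approximation: extend each segment while
    every sample stays within eps of the segment mean."""
    segments = []  # list of (segment_length, segment_mean) pairs
    start = 0
    while start < len(samples):
        end = start + 1
        while end < len(samples):
            window = samples[start:end + 1]
            mean = sum(window) / len(window)
            if max(abs(x - mean) for x in window) > eps:
                break  # adding one more sample would violate the bound
            end += 1
        window = samples[start:end]
        segments.append((end - start, sum(window) / len(window)))
        start = end
    return segments

def apca_decompress(segments):
    """Rebuild an approximate series: repeat each mean for its length."""
    return [m for (n, m) in segments for _ in range(n)]
```

Here five readings collapse into two segments while every reconstructed value stays within `eps` of the original, which is exactly the kind of dimensionality reduction DiDA exploits before transmission.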
- Conference Article
48
- 10.1109/infcom.2007.196
- Jan 1, 2007
We consider the scenario of distributed data aggregation in wireless sensor networks, where each sensor can obtain and estimate the information of the whole sensing field through local data exchange and aggregation. The intrinsic trade-off between energy and delay in aggregation operations poses a crucial question for nodes: deciding the optimal instants for forwarding their samples. The samples could be composed of the information from their own sensor readings or an aggregation of information with other samples forwarded from neighboring nodes. By considering the randomness of the sample arrival instants and the uncertainty of the availability of the multiaccess communication channel due to the asynchronous nature of information exchange among neighboring nodes, we propose a decision process model to analyze this problem and determine the optimal decision policies at nodes with local information. We show that, once the statistics of the sample arrival and the availability of the channel satisfy certain conditions, there exist optimal control-limit type policies which are easy to implement in practice. In the case that the required conditions are not satisfied, we provide two learning algorithms to solve a finite-state approximation model of the decision problem. Simulations on a practical distributed data aggregation scenario demonstrate the effectiveness of the developed policies, which can also achieve a desired energy-delay tradeoff.
- Research Article
46
- 10.1109/tnet.2008.2011644
- Oct 1, 2009
- IEEE/ACM Transactions on Networking
The scenario of distributed data aggregation in wireless sensor networks is considered, where sensors can obtain and estimate the information of the whole sensing field through local data exchange and aggregation. An intrinsic tradeoff between energy and aggregation delay is identified, where nodes must decide optimal instants for forwarding samples. The samples could be from a node's own sensor readings or an aggregation with samples forwarded from neighboring nodes. By considering the randomness of the sample arrival instants and the uncertainty of the availability of the multiaccess communication channel, a sequential decision process model is proposed to analyze this problem and determine optimal decision policies with local information. It is shown that, once the statistics of the sample arrival and the availability of the channel satisfy certain conditions, there exist optimal control-limit-type policies that are easy to implement in practice. In the case that the required conditions are not satisfied, the performance loss of using the proposed control-limit-type policies is characterized. In general cases, a finite-state approximation is proposed and two on-line algorithms are provided to solve it. Practical distributed data aggregation simulations demonstrate the effectiveness of the developed policies, which also achieve a desired energy-delay tradeoff.
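A control-limit policy of the kind identified in these two papers reduces to a simple threshold rule at each node: keep aggregating arriving samples and transmit once the observed state (here, the waiting time) crosses the limit. The toy simulation below illustrates the resulting energy-delay tradeoff; the Bernoulli arrival model and all parameters are assumptions for illustration, not the papers' model.

```python
import random

def simulate(limit, arrival_rate=0.3, horizon=1000, seed=0):
    """Simulate one node under a control-limit policy: aggregate incoming
    samples and transmit once the waiting time reaches 'limit'."""
    rng = random.Random(seed)
    held, wait = 0, 0              # samples aggregated; time since first one
    transmissions, delays = 0, []
    for _ in range(horizon):
        if rng.random() < arrival_rate:
            held += 1              # a new sample arrives and is aggregated
        if held and wait >= limit:
            transmissions += 1     # one packet carries the whole batch
            delays.append(wait)
            held, wait = 0, 0
        elif held:
            wait += 1
    mean_delay = sum(delays) / len(delays) if delays else 0.0
    return transmissions, mean_delay
```

Running this with a low versus a high limit shows the tradeoff directly: a small threshold spends energy on many transmissions with little delay, while a large threshold batches more samples per packet at the cost of longer aggregation delay.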
- Conference Article
7
- 10.1109/wcnc49053.2021.9417576
- Mar 29, 2021
Distributed data aggregation is a critical design aspect in future Internet-of-Things (IoT) networks. Over-the-air computation (AirComp) is capable of achieving ultra-fast data aggregation by exploiting the superposition property of the wireless channel. However, the performance of AirComp, measured by the mean-squared error (MSE), is generally restricted by unfavorable channel conditions and relies on the availability of perfect channel state information (CSI). In this paper, we propose to use a reconfigurable intelligent surface (RIS) to assist wireless data aggregation in IoT networks via AirComp in the presence of imperfect CSI. By taking into account the constraints of the transmit power at the devices and the unit modulus of the RIS, we formulate an optimization problem to jointly optimize the transmit power of IoT devices, the beamforming vector at the access point, and the phase-shift matrix at the RIS under the expectation-based channel uncertainty model. We present an alternating optimization method to solve this nonconvex problem. In each iteration, the transmit power and the receive beamformer are updated according to Karush-Kuhn-Tucker conditions and a closed-form solution, respectively. Moreover, we also develop a difference-of-convex algorithm to tackle the nonconvex rank-one constraint in the problem of optimizing the phase-shift matrix. Simulation results illustrate the robustness of the proposed algorithm in terms of minimizing the AirComp distortion.
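The AirComp principle, stripped of the RIS and robust-design machinery, can be illustrated with a textbook baseline: devices pre-scale their signals by the inverse channel so the receiver's superposed signal directly estimates the sum. The Monte-Carlo sketch below uses this channel-inversion baseline as an assumption for illustration (it is not the paper's algorithm, and the function names are invented); it shows how the MSE degrades as the weakest channel shrinks, which is precisely the limitation the RIS is introduced to relieve.

```python
import math
import random

def aircomp_mse(h, sigma_n, p_max, trials=2000, seed=1):
    """Monte-Carlo MSE of estimating the sum of unit-variance signals via
    AirComp with channel inversion under a per-device power budget."""
    rng = random.Random(seed)
    # The common scaling factor is limited by the weakest channel so that
    # every device's transmit power (eta / h_i)^2 stays within p_max.
    eta = min(abs(hi) * math.sqrt(p_max) for hi in h)
    err = 0.0
    for _ in range(trials):
        s = [rng.gauss(0, 1) for _ in h]
        # Superposed received signal: sum of (pre-scaled signal * channel) + noise
        y = sum((eta / hi) * hi * si for hi, si in zip(h, s))
        y += rng.gauss(0, sigma_n)
        est = y / eta                      # receiver's estimate of sum(s)
        err += (est - sum(s)) ** 2
    return err / trials
```

With full inversion the residual error is purely the scaled receiver noise, so the MSE is roughly sigma_n^2 / eta^2; halving the weakest channel gain quadruples the distortion.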
- Research Article
40
- 10.1109/tnet.2012.2221165
- Aug 1, 2013
- IEEE/ACM Transactions on Networking
Wireless sensor networks (WSNs) are more likely to be distributed asynchronous systems. In this paper, we investigate the achievable data collection capacity of realistic distributed asynchronous WSNs. Our main contributions include five aspects. First, to avoid data transmission interference, we derive an ℜ₀-proper carrier-sensing range (ℜ₀-PCR) under the generalized physical interference model, where ℜ₀ is the satisfied threshold of the data receiving rate. Taking ℜ₀-PCR as its carrier-sensing range, any sensor node can initiate a data transmission with a guaranteed data receiving rate. Second, based on ℜ₀-PCR, we propose a Distributed Data Collection (DDC) algorithm with fairness consideration. Theoretical analysis of DDC surprisingly shows that its achievable network capacity is order-optimal and independent of network size; thus, DDC is scalable. Third, we discuss how to apply ℜ₀-PCR to the distributed data aggregation problem and propose a Distributed Data Aggregation (DDA) algorithm, whose delay performance is also analyzed. Fourth, to be more general, we study the delay and capacity of DDC and DDA under the Poisson node distribution model; the analysis demonstrates that DDC is also scalable and order-optimal under this model. Finally, we conduct extensive simulations to validate the performance of DDC and DDA.
- Research Article
9
- 10.1007/s10878-012-9504-9
- May 26, 2012
- Journal of Combinatorial Optimization
This paper focuses on the distributed collision-free scheduling problem for data aggregation, one of the most important issues in wireless sensor networks. Bo et al. (Proc. IEEE INFOCOM, 2009) proposed an approximate distributed algorithm for the problem, and Xu et al. (Proc. ACM FOWANC, 2009) proposed a centralized algorithm and its distributed implementation to generate a collision-free schedule; these are the only two existing distributed algorithms. Unfortunately, there are a few mistakes in the performance analysis of Bo et al. (Proc. IEEE INFOCOM, 2009), and the distributed algorithm of Xu et al. (Proc. ACM FOWANC, 2009) cannot achieve the same latency as the centralized algorithm, because the distributed implementation was not an accurate implementation of the centralized algorithm. Accordingly, we propose an improved distributed algorithm to generate a collision-free schedule for data aggregation in wireless sensor networks. Instead of the arbitrary tree of Bo et al. (Proc. IEEE INFOCOM, 2009), a breadth-first search (BFS) tree rooted at the sink node is adopted, and a bounded latency of 61R+5Δ−67 is obtained, where R is the radius of the network with respect to the sink node and Δ is the maximum node degree. We also correct the latency bound of the schedule in Bo et al. (Proc. IEEE INFOCOM, 2009) to 61D+5Δ−67, where D is the diameter of the network, and prove that our algorithm is more efficient than their algorithm. We also give a latency bound for the distributed implementation in Xu et al. (Proc. ACM FOWANC, 2009).
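The quantities in the latency bound are easy to compute for a concrete topology: R is the BFS depth from the sink and Δ the maximum node degree. A small Python sketch evaluating the bound 61R + 5Δ − 67 (function names are illustrative; the adjacency-list representation is an assumption):

```python
from collections import deque

def bfs_levels(adj, sink):
    """Breadth-first search from the sink; returns hop distance per node."""
    dist = {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def aggregation_latency_bound(adj, sink):
    """Evaluate 61*R + 5*Delta - 67, with R the radius of the network with
    respect to the sink and Delta the maximum node degree."""
    dist = bfs_levels(adj, sink)
    R = max(dist.values())
    delta = max(len(neigh) for neigh in adj.values())
    return 61 * R + 5 * delta - 67
```

For a 4-node path with the sink at one end, R = 3 and Δ = 2, so the bound evaluates to 61*3 + 5*2 − 67 = 126 time slots.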
- Research Article
70
- 10.1109/tvt.2010.2042186
- Jan 1, 2010
- IEEE Transactions on Vehicular Technology
In this paper, we study the major problems in applying Slepian-Wolf coding for data aggregation in cluster-based wireless sensor networks (WSNs). We first consider the clustered Slepian-Wolf coding (CSWC) problem, which aims at selecting a set of disjoint potential clusters to cover the whole network such that the global compression gain of Slepian-Wolf coding is maximized, and propose a distributed optimal-compression clustering (DOC) protocol to solve the problem. Under a cluster hierarchy constructed by the DOC protocol, we then consider the optimal intracluster rate-allocation problem. We prove that there exists an optimization algorithm that can find an optimal rate allocation within each cluster to minimize the intracluster communication cost and present an intracluster coding protocol to locally perform Slepian-Wolf coding within a single cluster. Furthermore, we propose a low-complexity joint-coding scheme that combines CSWC with intercluster explicit entropy coding to further reduce data redundancy caused by the possible spatial correlation between different clusters.
- Conference Article
1
- 10.1109/icra.2011.5979910
- May 1, 2011
In this work, the data aggregation problem for a multi-agent system is investigated within the framework of the Theory of Evidence. In the proposed scenario, agents are assumed to be independent, reliable sources that collect data and collaborate to reach common knowledge. In particular, each agent is assumed to provide a set of observations that does not change over time. A protocol for distributed data aggregation over graph-like network topologies is designed. Experiments with a sensor network have been carried out to corroborate the theoretical results and the feasibility of the proposed approach.
- Research Article
50
- 10.1109/lcomm.2003.815663
- Aug 1, 2003
- IEEE Communications Letters
DADMA is a distributed data aggregation and dilution technique for sensor networks in which nodes aggregate or dilute sensed data by following the rules given in an SQL statement. Our test results show that DADMA reduces the number of transmitted packets by 60% on average.
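DADMA's rule-driven aggregation can be mimicked with a tiny SQL-like GROUP BY applied at the node: many raw readings collapse into one value per group before transmission. The sketch below is a hedged illustration only; the record layout and function names are invented, and DADMA itself evaluates actual SQL statements rather than Python callables.

```python
def aggregate_readings(readings, group_key, agg):
    """Apply an SQL-like 'SELECT agg(value) ... GROUP BY key' rule locally,
    so each group is reported as one packet instead of many."""
    groups = {}
    for r in readings:
        groups.setdefault(group_key(r), []).append(r["value"])
    return {k: agg(values) for k, values in groups.items()}
```

Three raw readings grouped by region and averaged become two outgoing values, the same packet-count reduction effect the letter reports at larger scale.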
- Conference Article
- 10.5220/0005991604400445
- Jan 1, 2016
Data aggregation in wireless sensor networks is used to reduce communication overhead and bandwidth utilization. Data confidentiality requires the sensor nodes to transmit data in a secure manner so that an adversary is unable to read the data or inject false data even if it compromises some of the sensor nodes or the aggregation node. In this paper, a distributed aggregation protocol using a homomorphic trapdoor permutation is proposed. This protocol distributes the responsibilities of key generation, aggregation, and verification to different nodes to reduce the overall power consumption of the sensor network. A peer verification scheme is also proposed as part of the protocol; peer verification ensures the authentication of the data and of the sender node by at least k peer nodes. The security of the proposed protocol is analyzed against passive and active adversary models.
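The underlying idea of aggregating without exposing individual reports can be illustrated with pairwise masking, a standard secure-aggregation building block: each pair of nodes shares a random secret that one adds and the other subtracts, so individual values are hidden while their sum survives intact. This sketch is a stand-in for, not a reproduction of, the paper's homomorphic trapdoor-permutation protocol; the modulus, seed, and names are assumptions.

```python
import random

M = 2 ** 32  # modulus, assumed large enough to hold the true aggregate

def mask_values(values, seed=7):
    """Each pair (i, j) shares a random secret r: node i adds r, node j
    subtracts it, so all masks cancel in the modular sum."""
    rng = random.Random(seed)
    n = len(values)
    masks = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.randrange(M)
            masks[i] = (masks[i] + r) % M
            masks[j] = (masks[j] - r) % M
    return [(v + m) % M for v, m in zip(values, masks)]

def aggregate(masked):
    """The aggregator sums masked reports; the masks cancel mod M."""
    return sum(masked) % M
```

The aggregator recovers the exact total without ever seeing an unmasked report, which is the confidentiality property the paper's protocol provides with cryptographic (rather than trust-in-seed) guarantees.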
- Conference Article
9
- 10.1109/wcnc.2014.6952601
- Apr 1, 2014
Machine-to-machine (M2M) communications have emerged as a flourishing technology for next-generation communications, and are undergoing rapid development while inspiring numerous applications. However, unique features of M2M communications, such as the massive number of machine type devices (MTD) and delay sensitive applications require specific considerations. To enhance the communication efficiency with delay sensitive short messages, a key strategy is to utilize data aggregation. To facilitate an efficient distributed data aggregation among MTDs with different urgency levels, we propose a game theoretic mechanism based on the coalitional game. Through the proposed algorithm, MTDs autonomously collaborate and self-organize into disjoint independent and stable coalitions, and send their data through a coalition head known as the aggregator. Within each coalition, the utility of the users is defined in such a way that maximum cooperation is compelled. Finally, we discuss the stability of the resulting network structure, and analyse the performance of the proposed scheme.
- Book Chapter
2
- 10.1007/978-3-030-59016-1_67
- Jan 1, 2020
In the past decades, dynamic sensor networks have played an increasingly important role in many real-life areas, including disaster relief, environment monitoring, and public safety, by rapidly collecting information from the environment to support decision making. Meanwhile, due to the widespread deployment of dynamic sensor networks, there is an enormous demand for suitable models and efficient algorithms for fundamental operations in such networks, such as data aggregation from mobile sensors to the base station. In this paper, we first present a general dynamic model that comprehensively captures most dynamic phenomena in sensor networks. Then, based on the proposed dynamic model, an efficient distributed data aggregation algorithm is proposed to aggregate k messages from sensors to the base station within O(k) time steps in expectation and \(O(k+\log n)\) time steps with high probability. Rigorous theoretical analysis and extensive simulations are presented to verify the efficacy of our proposed algorithm.
- Conference Article
38
- 10.1109/icc.2007.596
- Jun 1, 2007
Slepian-Wolf coding is a promising distributed source coding technique that can completely remove the data redundancy caused by the spatially correlated observations in wireless sensor networks (WSNs). In this paper, we study the major problems in applying Slepian-Wolf coding for data aggregation in cluster-based WSNs with an objective to optimize data compression so that the total amount of data in the whole network is minimized. We first consider the clustered Slepian-Wolf coding problem, which aims at selecting a set of disjoint potential clusters to cover the whole network such that the global compression gain of Slepian-Wolf coding is maximized. To solve this problem, a distributed optimal-compression clustering protocol (DOC2) is proposed. Under the optimal cluster hierarchy constructed by DOC2, we then consider the optimal intra-cluster rate allocation problem and present an approximation algorithm that can find an optimal rate allocation within each cluster to minimize the intra-cluster communication cost. With the optimal intra-cluster rate allocation found, the procedures to perform Slepian-Wolf coding within a cluster are also presented.
- Conference Article
- 10.1117/12.2606937
- Nov 30, 2021
With the development of edge computing, optically interconnected edge data centers have attracted extensive attention due to their storage, computing, and large-bandwidth capabilities. Caching popular content in the edge data center has been proposed to reduce network load and latency. Due to the limited capability of a single edge data center, multiple edge data centers are required to cooperate to meet specific business requirements. Data in the edge computing optical network is distributed according to area deployment, and coordination between data centers can cause resource waste and delay due to asynchronous transmission times. We designed and implemented a latency-controlled distributed content caching and data aggregation experiment in MEC-empowered metro optical networks. The system not only realizes dynamic network configuration and service deployment but also reduces the average delay and improves resource utilization.