Abstract

Wireless networks, and in particular Wireless Sensor Networks (WSNs), are a promising technology for monitoring and collecting specific measurements in any environment. Many applications have already been envisioned in a wide range of areas, such as military, commercial, emergency, biological, and health care applications. A sensor is a physical component able to accomplish three tasks: measure a physical quantity, process the resulting information, and transmit it to a sink (Kumar et al., 2008; Buford et al., 2009; Olfati-Saber et al., 2007). Guaranteeing the quality of real-time services requires Quality of Service (QoS) mechanisms that take into account not only static network parameters but also dynamic ones; QoS measures must therefore be introduced into the network. The most popular formulation of the optimal distributed routing problem in a data network is based on multicommodity flow optimization, whereby a separate objective function is minimized with respect to each type of flow, subject to multicommodity flow constraints. Given the complexity of this problem, due to the diversity of the QoS constraints, we focus our attention in this chapter on bio-inspired QoS routing policies based on the Reinforcement Learning paradigm applied to Wireless Sensor Networks. Much research work focuses on optimizing energy consumption in sensor networks, as it directly affects network lifetime. Routing protocols have been proposed to minimize energy consumption while providing the coverage and connectivity the nodes need to send data to the sink. Other routing protocols have also been proposed for WSNs to improve further QoS constraints, such as delay. The problem is that the complexity of routing protocols increases dramatically when more than one QoS parameter is integrated. Indeed, determining a QoS route that satisfies two or more non-correlated constraints (for example, delay and bandwidth) is an NP-complete problem (Mellouk et al., 2007), because the Multi-Constrained Optimal Path problem cannot be solved in polynomial time. Therefore, research focus has shifted to the development of pseudo-polynomial-time algorithms, heuristics, and approximation algorithms for multi-constrained QoS paths. In this chapter, we present a detailed description of the current state of the art and give an overview of our work on applying reinforcement learning concepts to Wireless Sensor Networks. We focus our attention on two systems based on this paradigm, called AMDR and EDAR. Basically, these inductive approaches select routes based on flow
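The abstract does not describe how AMDR and EDAR actually pick next hops, but Q-routing-style reinforcement learning is a common way to realize this kind of inductive, feedback-driven route selection in a sensor node. The following sketch is a minimal, hypothetical illustration of that general idea, not the chapter's algorithms: each node keeps a Q-value per (destination, neighbor) pair estimating a QoS cost such as delay, picks the neighbor with the lowest estimate (with occasional exploration), and updates the estimate from the neighbor's feedback. The class name, the alpha and epsilon parameters, and the delay-based cost are all assumptions.

    # Minimal Q-routing-style sketch for next-hop selection in a sensor node.
    # Hypothetical illustration only: AMDR/EDAR details are not given in the abstract;
    # names, parameters (alpha, epsilon) and the delay-based cost are assumptions.
    import random
    from collections import defaultdict

    class QRoutingNode:
        def __init__(self, node_id, neighbors, alpha=0.5, epsilon=0.1):
            self.node_id = node_id
            self.neighbors = list(neighbors)
            self.alpha = alpha        # learning rate
            self.epsilon = epsilon    # exploration probability
            # q[destination][neighbor] = estimated delivery cost (e.g., delay) via that neighbor
            self.q = defaultdict(lambda: defaultdict(float))

        def choose_next_hop(self, destination):
            """Pick a neighbor: usually the lowest estimated cost, sometimes explore."""
            if random.random() < self.epsilon:
                return random.choice(self.neighbors)
            return min(self.neighbors, key=lambda n: self.q[destination][n])

        def update(self, destination, neighbor, link_cost, neighbor_estimate):
            """Update after feedback from the chosen neighbor.

            link_cost         -- measured cost of this hop (e.g., transmission delay)
            neighbor_estimate -- the neighbor's best remaining-cost estimate to the destination
            """
            old = self.q[destination][neighbor]
            target = link_cost + neighbor_estimate
            self.q[destination][neighbor] = old + self.alpha * (target - old)

        def best_estimate(self, destination):
            """Cost estimate this node would report to its own upstream neighbors."""
            if not self.neighbors:
                return 0.0
            return min(self.q[destination][n] for n in self.neighbors)

    # Tiny usage example: node "s2" routes toward the sink and learns from feedback.
    node = QRoutingNode("s2", neighbors=["s1", "s3", "sink"])
    hop = node.choose_next_hop("sink")
    node.update("sink", hop, link_cost=12.0, neighbor_estimate=30.0)

In such a scheme, the per-neighbor estimates adapt to dynamic network conditions (queueing delay, congestion, residual energy if added to the cost), which is what makes reinforcement-learning-based routing attractive for the multi-constraint QoS setting the abstract describes.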
