Abstract

Embedded sensors in a Body Sensor Network need to efficiently utilize their energy resources to operate for an extended period of time. A Markov Decision Process (MDP) framework has been used to obtain a globally optimal policy that coordinates the sampling of multiple sensors, achieving high efficiency in such sensor networks. However, storing the coordinated sampling policy as a lookup table requires a large amount of memory, which may not be available on the embedded sensors. A compact representation of the global MDP policy would therefore be useful for such sensor nodes. In this paper we show that decision tree-based learning of a compact representation is feasible with little loss in performance. The globally optimal policy is computed offline using the MDP framework and then used as training data for a decision tree learner. Our simulation results show that both unpruned and high-confidence-pruned decision trees provide an error rate of less than 1% while significantly reducing the memory requirements. Ensembles of lower-confidence trees are capable of perfect representation with only a small increase in classifier size compared to individual pruned trees.
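
As a rough illustration of the pipeline the abstract describes, the sketch below trains a decision tree on a state-to-action policy table and compares the size and error rate of an unpruned tree against a pruned one. The synthetic data, the scikit-learn learner, and the use of cost-complexity pruning (`ccp_alpha`) are all illustrative assumptions, not the paper's setup; in particular, the paper's confidence-based pruning differs from the cost-complexity pruning used here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical policy table: each row is a joint sensor state (e.g. battery
# and buffer levels); the label is the optimal joint sampling action that an
# offline MDP solver would have produced for that state.
states = rng.integers(0, 8, size=(10_000, 4))
actions = (states.sum(axis=1) > 14).astype(int)  # stand-in for the MDP policy

# Unpruned tree: grows until it reproduces the lookup table (near-)exactly.
unpruned = DecisionTreeClassifier(random_state=0).fit(states, actions)

# Pruned tree: pruning trades a small error rate for a much smaller tree,
# mirroring the paper's memory/accuracy trade-off.
pruned = DecisionTreeClassifier(ccp_alpha=1e-3, random_state=0).fit(states, actions)

for name, tree in (("unpruned", unpruned), ("pruned", pruned)):
    error = 1.0 - tree.score(states, actions)  # disagreement with the table
    print(f"{name}: {tree.tree_.node_count} nodes, error rate {error:.2%}")
```

On a real policy table, the pruned tree's node count is a direct proxy for the memory footprint on the sensor node, which is the quantity the paper is trying to reduce.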
