ABSTRACT

Wireless sensor networks (WSNs) are self-configurable, infrastructure-less wireless networks whose nodes consist of a wireless transceiver, a microcontroller, and an energy source assembled as one unit (Akyildiz et al. 2002). The sensor nodes in a WSN are resource constrained in terms of computation, storage, communication, and energy. Because WSN nodes are battery powered, minimizing energy consumption is of prime importance; an effective approach is to put a node into sleep mode when it is not sensing any physical event. Machine learning techniques can minimize the energy consumption of WSN nodes by putting them to sleep intelligently. This chapter combines features of several learning approaches, such as an offline critic function and real-time learning algorithms, to extend the lifetime of WSN nodes. The purpose of the machine learning algorithm is to predict the behavior of the environment where the network is deployed and to update the parameters of the WSN accordingly. We mainly focus on reinforcement learning (RL) (Sutton and Barto 1988) algorithms running on each WSN node. The key contribution of this method is the theoretical validation and implementation of reward functions and offline critic functions that achieve the desired energy savings.
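To make the idea concrete, the following is a minimal sketch, not the chapter's actual method, of per-node RL duty cycling: a stateless Q-learning (bandit-style) agent chooses between sleeping and staying active each sensing epoch. The actions, reward values, and event model are illustrative assumptions.

```python
import random

# Two actions available to the node each epoch (illustrative).
ACTIONS = ["sleep", "active"]

def reward(action, event):
    """Hypothetical reward: favour sleeping when nothing happens,
    being active when an event must be sensed."""
    if action == "active":
        return 1.0 if event else -0.2   # sensed event vs idle-listening cost
    return -1.0 if event else 0.5       # missed event vs energy saved

def train(event_prob, episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}       # single-state Q-table
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        event = rng.random() < event_prob
        # Incremental (bandit-style) Q update toward the observed reward.
        q[a] += alpha * (reward(a, event) - q[a])
    return q

# A node in a quiet environment learns to prefer sleeping;
# one in a busy environment learns to stay active.
quiet = train(event_prob=0.05)
busy = train(event_prob=0.9)
```

Under these assumed rewards the learned policy adapts to the event rate of the deployment site, which is the intuition behind using RL for sleep scheduling.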