Abstract

This paper describes the use of artificial neural networks (ANNs) to model cooling energy use in commercial buildings, and compares the attributes of ANNs and least-squares (LS) regression modeling techniques. The neuro-biological roots of ANN models and the fundamentals of the backpropagation algorithm are described. The effects of differing values of model parameters (gain, bias and learning rate) and network architectures (three- and five-layer networks) on the rate of convergence and prediction accuracy of ANN models are discussed. Finally, the attributes of ANN and least-squares regression models are compared in a case-study example using measured energy use data from a large commercial building. The results draw attention to the importance of parameter selection when using ANN models, and indicate that multiple hidden layers in ANNs appear to be necessary when modeling the non-linear energy use typical of commercial buildings.

Introduction

The ability to accurately predict the behavior of energy-using systems in commercial buildings is increasingly valuable. Predicted energy use can be compared to observed energy use in order to identify operational problems and measure the effectiveness of energy conservation retrofits [1]. Energy use forecasts can also be incorporated into control procedures which enhance comfort conditions and reduce energy and demand expenses [2,3]. Although engineering models can estimate building energy use, difficulties inherent in the calibration procedure often limit the accuracy of the resulting predictions. Empirical models, such as LS regression models and ANNs, increase the accuracy and reduce the modeling time required for energy use forecasts. This paper describes the use of ANNs to model cooling energy use in a commercial building and compares the attributes of ANN and LS models.

The Neurobiological Model and ANNs

The human brain is a highly complex organ comprised of some 10^11 basic units called neurons.
Each neuron is connected to about 10^4 other neurons. Because of this highly interconnected nature, the architecture of the brain is referred to as being massively parallel or massively interconnected. Each neuron consists of a soma, dendrites, axons and synapses (Figure 1). The soma is the body of the neuron. Dendrites and axons extend from the soma and branch out like roots. If a neuron receives enough active inputs along its dendrites, it fires and sends a voltage spike down the axons. Axons are connected to other dendrites and somas at synapses. When a neuron fires, chemicals called neurotransmitters are diffused across the synapses. Learning is thought to occur at the synapses, where the neurotransmitters and neuroreceivers vary to reinforce good connections and discourage bad connections [4].

Figure 1. Schematic representation of a neuron

ANNs attempt to mimic parts of the architecture and functionality of the brain. Neurons are simulated in ANNs as connected nodes. The distributed, parallel processing structure of the brain is simulated by arranging the nodes in layers such that each node is connected to all of the nodes in the adjacent layers. In a manner analogous to the response of a neuron, each node sums the inputs it receives and transmits an output signal to the other nodes to which it is connected. The output signal of each ANN node is multiplied by a weight which is varied during the learning process, just as synaptic neurotransmitters and receivers are varied in the human learning process. The distributed, parallel architecture of the brain is well suited to learning and pattern recognition tasks such as vision, in which several processes and comparisons are made simultaneously.

Copyright © 1994 by John K. Kissock. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission.
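The node behavior described above can be sketched in a few lines of code: each node forms a weighted sum of its inputs and fires through a sigmoid activation. This is a minimal sketch; the particular sigmoid form and the way the `gain` and `bias` parameters enter it are illustrative assumptions, not the paper's exact formulation.

```python
import math

def neuron_output(inputs, weights, bias=0.0, gain=1.0):
    """Output of a single ANN node.

    The node sums its weighted inputs (plus a bias shifting the firing
    threshold) and passes the result through a sigmoid whose steepness
    is scaled by `gain`. Parameter names follow the abstract's list
    (gain, bias), but this functional form is an assumption.
    """
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-gain * net))

# Example: a node with two inputs and illustrative weights
y = neuron_output([0.5, 0.8], [0.4, -0.2], bias=0.1, gain=1.0)
```

The sigmoid squashes the node's response into (0, 1), a continuous analogue of a neuron either firing or staying quiet.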
In contrast, the digital computer is a serial device in which a single central processing unit (CPU) sequentially processes a set of instructions. ANN algorithms simulate the brain's parallel architecture in the serial environment of the digital computer, with the hope of mimicking parts of the brain's remarkable capacity for learning and pattern recognition.

Generalized Delta, Back-Propagation Algorithm

Many different types of ANNs have been devised to accomplish a wide variety of tasks, including recognition of handwritten English words, speech recognition and image compression [5]. The ANNs examined here employ a fully-connected, feedforward architecture (Figure 2). Fully-connected means that each node is connected to all of the nodes in the adjacent layers (or columns of nodes). Feedforward indicates that information is passed in a single direction from the input to the output nodes.
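The fully-connected, feedforward structure and the generalized delta (backpropagation) rule can be illustrated with a tiny network. This is a minimal sketch under stated assumptions, not the paper's implementation: the 2-2-1 layer sizes, the sigmoid activations, the learning rate, and the OR training task are all chosen only for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyFeedforwardNet:
    """A fully-connected 2-2-1 feedforward network trained with the
    generalized delta (backpropagation) rule. Illustrative sketch only."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # One weight per connection between adjacent layers, plus a bias per node.
        self.w_hidden = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.b_hidden = [0.0, 0.0]
        self.w_out = [rng.uniform(-1, 1) for _ in range(2)]
        self.b_out = 0.0

    def forward(self, x):
        # Feedforward: each node sums its weighted inputs and fires
        # through a sigmoid; signals move only from input toward output.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w_hidden, self.b_hidden)]
        self.y = sigmoid(sum(w * hi for w, hi in zip(self.w_out, self.h))
                         + self.b_out)
        return self.y

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        # Output-layer delta: error scaled by the sigmoid derivative y(1 - y).
        d_out = (target - y) * y * (1 - y)
        # Hidden deltas: the output delta propagated back through w_out.
        d_hid = [d_out * w * h * (1 - h) for w, h in zip(self.w_out, self.h)]
        # Generalized delta rule: adjust each weight by lr * delta * input.
        for j in range(2):
            self.w_out[j] += lr * d_out * self.h[j]
            for i in range(2):
                self.w_hidden[j][i] += lr * d_hid[j] * x[i]
            self.b_hidden[j] += lr * d_hid[j]
        self.b_out += lr * d_out
        return (target - y) ** 2

# Example: learn the logical-OR mapping (illustrative training task).
net = TinyFeedforwardNet()
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for _ in range(5000):
    for x, t in data:
        net.train_step(x, t)
```

The repeated forward pass and weight correction is the cycle the section goes on to describe: errors measured at the output are propagated backward, layer by layer, to assign each weight its share of the correction.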