Abstract

Cloud cover estimation from images taken by sky-facing cameras can be an important input for analyzing current weather conditions and estimating photovoltaic power generation. The constant change in the position, shape, and density of clouds, however, makes developing a robust computational method for cloud cover estimation challenging. Accurately determining cloud edges, and hence the separation between cloud and clear sky, is difficult and often impossible. To determine cloud cover for estimating photovoltaic output, we propose using machine learning methods for cloud segmentation. We compare several methods, including a classical regression model, deep learning methods, and boosting methods that combine the results of the other machine learning models. To train each machine learning model on varied sky conditions, we supplemented the existing Singapore whole sky imaging segmentation database with hazy and overcast images collected by a camera-equipped Waggle sensor node. We found that the U-Net architecture, one of the deep neural networks we utilized, segmented cloud pixels most accurately; however, high accuracy in segmenting cloud pixels did not guarantee high accuracy in estimating solar irradiance. We confirmed that the cloud cover ratio is directly related to solar irradiance, and that solar irradiance is in turn closely related to solar power output; hence, by predicting solar irradiance, we can estimate solar power output. This study demonstrates that sky-facing cameras with machine learning methods can be used to estimate solar power output. This ground-based approach provides an inexpensive way to understand solar irradiance and estimate production from photovoltaic solar facilities.
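As a minimal illustration of the quantity the abstract relates to solar irradiance, the sketch below computes a cloud cover ratio from a binary segmentation mask (1 = cloud, 0 = clear sky). The function name and example values are hypothetical and not taken from the paper.

```python
import numpy as np

def cloud_cover_ratio(mask: np.ndarray) -> float:
    """Fraction of sky pixels labeled as cloud.

    `mask` is a binary segmentation (1 = cloud, 0 = clear sky),
    e.g., a thresholded output of a model such as U-Net.
    """
    return float(mask.sum()) / mask.size

# Hypothetical example: a 2x2 mask with one cloud pixel -> 0.25 cover.
print(cloud_cover_ratio(np.array([[1, 0], [0, 0]])))  # 0.25
```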

Highlights

  • Clouds have been widely studied in a variety of fields

  • We found that the U-shaped network (U-Net) deep learning model was especially sensitive to thin and brightly colored cloud pixels compared with the other models (Figure 4d)

  • Because the AdaBoost model is an ensemble of segmentation results from four machine learning models, we hypothesize that it learned that the classification results from the fully convolutional network (FCN) are, with high probability, the inverse of the ground truth when thin clouds cover the sky
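To make this highlight concrete, the following sketch shows one way AdaBoost can fuse per-pixel predictions from four base segmenters, in the spirit of the ensemble described above. The synthetic data, noise level, and n_estimators value are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
H, W = 8, 8

# Synthetic ground truth and four noisy binary masks standing in for
# the outputs of color-based segmentation, FCN, U-Net, and DeepLab v3.
truth = rng.integers(0, 2, size=(H, W))
masks = [np.where(rng.random((H, W)) < 0.8, truth, 1 - truth)
         for _ in range(4)]

# Per-pixel features: one row per pixel, one column per base model.
X = np.stack([m.ravel() for m in masks], axis=1)
y = truth.ravel()

# Fit the booster and fuse the four segmentations into one mask.
booster = AdaBoostClassifier(n_estimators=50).fit(X, y)
fused = booster.predict(X).reshape(H, W)
```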


Summary

Introduction

Clouds have been widely studied in a variety of fields. The shape and distribution of clouds are important for modeling climate and weather, understanding interactions between aerosols and clouds, and developing environmental forecasting models that include radiation and cloud properties [1,2]. Detecting and understanding cloud cover has been investigated for estimating and forecasting solar irradiance and for predicting photovoltaic power generation [3]. Across all these problem domains, the magnitude of cloud coverage is important, along with factors such as wind direction, wind speed, and temperature. Nevertheless, deep learning methods for cloud segmentation of ground-based sky images remain largely underexplored. For estimating cloud cover, we utilize three deep neural networks: a fully convolutional network (FCN) [26], a U-shaped network (U-Net) [27], and DeepLab v3 [28]. In addition to these three deep neural networks, we utilize color-based segmentation and a boosting method.
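For concreteness, here is a minimal sketch of a classical color-based segmentation, assuming the normalized blue-red ratio (B - R)/(B + R) commonly used in whole-sky imaging; the function name and the 0.05 threshold are illustrative assumptions, and the paper's exact color-based method may differ.

```python
import numpy as np

def color_based_cloud_mask(rgb: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Label pixels as cloud by thresholding (B - R) / (B + R).

    Clear sky scatters blue light strongly, so a low normalized
    blue-red ratio suggests cloud. `rgb` is an (H, W, 3) array in
    RGB channel order; the 0.05 threshold is illustrative only.
    """
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    ratio = (b - r) / np.maximum(b + r, 1e-6)  # avoid divide-by-zero
    return (ratio < threshold).astype(np.uint8)  # 1 = cloud, 0 = sky
```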

Methodology
Color-Based Segmentation
Semantic Segmentation Neural Networks
Fully Convolutional Network
U-Net
DeepLab
Ensemble Method
Datasets
Singapore Whole Sky Imaging Segmentation Database
Hybrid Thresholding Algorithm Database
Waggle Cloud Dataset
Solar Irradiance and Solar Power Product Measure
Model Validation
Solar Irradiance Estimation
Cloud Cover Estimation
Solar Power Product Estimation
Findings
Conclusions and Future Work
