Abstract
CFD models of data centers often use two-equation turbulence models such as the k-ε model. These models rely on closure coefficients, or turbulence model constants, determined from a combination of scaling/dimensional analysis and experimental measurements of flows in simple configurations. The simple configurations used to derive these constants are often two-dimensional and lack many of the complex flow characteristics found in engineering flows. Such models therefore perform poorly in flows with large pressure gradients, swirl, and strong three-dimensionality, as in the case of data centers. This study uses machine learning algorithms to optimize the model constants of the k-ε turbulence model for a data center by comparing simulated data with experimentally measured temperature values. For a given set of turbulence constants, we compute the root mean square (RMS) error of the model, defined as the difference between temperatures measured in a data center test cell and the corresponding CFD predictions obtained with the k-ε model. An artificial neural network (ANN) based parameter-identification method is then used to find the turbulence constants that minimize this error. The optimized turbulence model constants obtained in our study lower the RMS error by 25% and the absolute average error by 35% compared to the errors obtained with the standard k-ε model constants.
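The workflow the abstract describes can be sketched as a loop that scores each candidate set of k-ε constants against measured temperatures and keeps the set with the lowest RMS error. The sketch below is illustrative only: `cfd_temperatures` is a hypothetical stand-in for a full CFD solve, the "measured" data are synthetic, and a simple random search stands in for the ANN-based parameter identification used in the paper.

```python
import numpy as np

# Hypothetical stand-in for a CFD solve: maps k-epsilon closure constants
# (C_mu, C_1e, C_2e) to predicted sensor temperatures. A real study would
# run the CFD solver here; this toy function only illustrates the error
# metric and the search loop.
def cfd_temperatures(constants, n_sensors=8):
    c_mu, c1, c2 = constants
    x = np.linspace(0.0, 1.0, n_sensors)
    return 20.0 + 10.0 * (c_mu / 0.09) * np.sin(np.pi * x) + (c2 - c1) * x

# Synthetic "measured" temperatures, generated from a known set of
# constants so the optimum is recoverable in this sketch.
TRUE_CONSTANTS = (0.11, 1.44, 2.05)
measured = cfd_temperatures(TRUE_CONSTANTS)

def rms_error(constants):
    diff = cfd_temperatures(constants) - measured
    return float(np.sqrt(np.mean(diff ** 2)))

def abs_avg_error(constants):
    return float(np.mean(np.abs(cfd_temperatures(constants) - measured)))

# Random search over plausible ranges, standing in for the ANN-based
# parameter identification described in the abstract. Start from the
# standard k-epsilon constants so any accepted candidate is an improvement.
rng = np.random.default_rng(0)
standard = (0.09, 1.44, 1.92)  # standard k-epsilon model constants
best, best_err = standard, rms_error(standard)
for _ in range(2000):
    cand = (rng.uniform(0.05, 0.15),
            rng.uniform(1.2, 1.6),
            rng.uniform(1.7, 2.2))
    err = rms_error(cand)
    if err < best_err:
        best, best_err = cand, err
```

In the actual study, each evaluation of `rms_error` would require a full CFD simulation, which is why a surrogate such as an ANN is attractive: it learns the mapping from constants to error and makes the search far cheaper than brute force.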